00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2231 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3490 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.179 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.217 Using shallow fetch with depth 1 00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.217 > git --version # timeout=10 00:00:00.255 > git --version # 'git version 2.39.2' 00:00:00.255 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.948 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.960 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.970 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:05.970 > git config core.sparsecheckout # timeout=10 00:00:05.980 > git read-tree -mu HEAD # timeout=10 00:00:05.994 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:06.010 Commit message: "kid: add issue 3541" 00:00:06.010 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:06.109 [Pipeline] Start of Pipeline 00:00:06.122 [Pipeline] library 00:00:06.124 Loading library shm_lib@master 00:00:06.124 Library shm_lib@master is cached. Copying from home. 00:00:06.142 [Pipeline] node 00:00:06.158 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.160 [Pipeline] { 00:00:06.171 [Pipeline] catchError 00:00:06.172 [Pipeline] { 00:00:06.186 [Pipeline] wrap 00:00:06.196 [Pipeline] { 00:00:06.204 [Pipeline] stage 00:00:06.205 [Pipeline] { (Prologue) 00:00:06.224 [Pipeline] echo 00:00:06.226 Node: VM-host-SM9 00:00:06.233 [Pipeline] cleanWs 00:00:06.242 [WS-CLEANUP] Deleting project workspace... 00:00:06.242 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.248 [WS-CLEANUP] done 00:00:06.435 [Pipeline] setCustomBuildProperty 00:00:06.509 [Pipeline] httpRequest 00:00:06.855 [Pipeline] echo 00:00:06.857 Sorcerer 10.211.164.101 is alive 00:00:06.866 [Pipeline] retry 00:00:06.868 [Pipeline] { 00:00:06.882 [Pipeline] httpRequest 00:00:06.886 HttpMethod: GET 00:00:06.887 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.887 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.888 Response Code: HTTP/1.1 200 OK 00:00:06.889 Success: Status code 200 is in the accepted range: 200,404 00:00:06.889 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:07.946 [Pipeline] } 00:00:07.964 [Pipeline] // retry 00:00:07.972 [Pipeline] sh 00:00:08.259 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:08.272 [Pipeline] httpRequest 00:00:09.219 [Pipeline] echo 00:00:09.221 Sorcerer 10.211.164.101 is alive 00:00:09.229 [Pipeline] retry 00:00:09.231 [Pipeline] { 00:00:09.245 [Pipeline] httpRequest 00:00:09.249 HttpMethod: GET 00:00:09.250 URL: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:09.251 Sending request to url: http://10.211.164.101/packages/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:00:09.271 Response Code: HTTP/1.1 200 OK 00:00:09.272 Success: Status code 200 is in the accepted range: 200,404 00:00:09.273 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:16.648 [Pipeline] } 00:01:16.666 [Pipeline] // retry 00:01:16.675 [Pipeline] sh 00:01:16.955 + tar --no-same-owner -xf spdk_726a04d705a30cca40ac8dc8d45f839602005b7a.tar.gz 00:01:19.500 [Pipeline] sh 00:01:19.779 + git -C spdk log --oneline -n5 00:01:19.779 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:19.779 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:19.779 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:19.779 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:19.779 9469ea403 nvme/fio_plugin: add trim support 00:01:19.800 [Pipeline] writeFile 00:01:19.817 [Pipeline] sh 00:01:20.100 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.112 [Pipeline] sh 00:01:20.397 + cat autorun-spdk.conf 00:01:20.397 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.397 SPDK_TEST_NVMF=1 00:01:20.397 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.397 SPDK_TEST_URING=1 00:01:20.397 SPDK_TEST_VFIOUSER=1 00:01:20.397 SPDK_TEST_USDT=1 00:01:20.397 SPDK_RUN_UBSAN=1 00:01:20.397 NET_TYPE=virt 00:01:20.397 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.404 RUN_NIGHTLY=1 00:01:20.406 [Pipeline] } 00:01:20.423 [Pipeline] // stage 00:01:20.439 [Pipeline] stage 00:01:20.441 [Pipeline] { (Run VM) 00:01:20.453 [Pipeline] sh 00:01:20.734 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.734 + echo 'Start stage prepare_nvme.sh' 00:01:20.734 Start stage prepare_nvme.sh 00:01:20.734 + [[ -n 1 ]] 00:01:20.734 + disk_prefix=ex1 00:01:20.734 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:20.734 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:20.734 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:20.734 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.734 ++ SPDK_TEST_NVMF=1 00:01:20.734 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.734 ++ SPDK_TEST_URING=1 00:01:20.734 ++ SPDK_TEST_VFIOUSER=1 00:01:20.734 ++ SPDK_TEST_USDT=1 00:01:20.734 ++ SPDK_RUN_UBSAN=1 00:01:20.734 ++ NET_TYPE=virt 00:01:20.734 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.734 ++ RUN_NIGHTLY=1 00:01:20.734 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:20.734 + nvme_files=() 00:01:20.734 + declare -A nvme_files 00:01:20.734 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.734 + nvme_files['nvme.img']=5G 00:01:20.734 + nvme_files['nvme-cmb.img']=5G 00:01:20.734 + nvme_files['nvme-multi0.img']=4G 00:01:20.734 + nvme_files['nvme-multi1.img']=4G 00:01:20.734 + nvme_files['nvme-multi2.img']=4G 00:01:20.734 + nvme_files['nvme-openstack.img']=8G 00:01:20.734 + nvme_files['nvme-zns.img']=5G 00:01:20.734 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.734 + (( SPDK_TEST_FTL == 1 )) 00:01:20.734 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.734 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:20.734 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.734 + for nvme in "${!nvme_files[@]}" 00:01:20.734 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:20.993 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.993 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:20.993 + echo 'End stage prepare_nvme.sh' 00:01:20.993 End stage prepare_nvme.sh 00:01:21.004 [Pipeline] sh 00:01:21.283 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.283 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img 
-b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:21.283 00:01:21.283 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:21.283 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:21.283 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:21.283 HELP=0 00:01:21.283 DRY_RUN=0 00:01:21.283 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:21.283 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.283 NVME_AUTO_CREATE=0 00:01:21.283 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:21.283 NVME_CMB=,, 00:01:21.283 NVME_PMR=,, 00:01:21.283 NVME_ZNS=,, 00:01:21.283 NVME_MS=,, 00:01:21.283 NVME_FDP=,, 00:01:21.284 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.284 SPDK_VAGRANT_VMCPU=10 00:01:21.284 SPDK_VAGRANT_VMRAM=12288 00:01:21.284 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.284 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.284 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.284 SPDK_OPENSTACK_NETWORK=0 00:01:21.284 VAGRANT_PACKAGE_BOX=0 00:01:21.284 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.284 FORCE_DISTRO=true 00:01:21.284 VAGRANT_BOX_VERSION= 00:01:21.284 EXTRA_VAGRANTFILES= 00:01:21.284 NIC_MODEL=e1000 00:01:21.284 00:01:21.284 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:21.284 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:24.570 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.570 ==> default: Creating image (snapshot of base box volume). 00:01:24.828 ==> default: Creating domain with the following settings... 
00:01:24.828 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727568820_1cbd73a9492aa6912f0a 00:01:24.828 ==> default: -- Domain type: kvm 00:01:24.828 ==> default: -- Cpus: 10 00:01:24.828 ==> default: -- Feature: acpi 00:01:24.828 ==> default: -- Feature: apic 00:01:24.828 ==> default: -- Feature: pae 00:01:24.828 ==> default: -- Memory: 12288M 00:01:24.828 ==> default: -- Memory Backing: hugepages: 00:01:24.828 ==> default: -- Management MAC: 00:01:24.828 ==> default: -- Loader: 00:01:24.828 ==> default: -- Nvram: 00:01:24.828 ==> default: -- Base box: spdk/fedora39 00:01:24.828 ==> default: -- Storage pool: default 00:01:24.828 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727568820_1cbd73a9492aa6912f0a.img (20G) 00:01:24.828 ==> default: -- Volume Cache: default 00:01:24.828 ==> default: -- Kernel: 00:01:24.828 ==> default: -- Initrd: 00:01:24.828 ==> default: -- Graphics Type: vnc 00:01:24.828 ==> default: -- Graphics Port: -1 00:01:24.828 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.828 ==> default: -- Graphics Password: Not defined 00:01:24.828 ==> default: -- Video Type: cirrus 00:01:24.828 ==> default: -- Video VRAM: 9216 00:01:24.828 ==> default: -- Sound Type: 00:01:24.828 ==> default: -- Keymap: en-us 00:01:24.828 ==> default: -- TPM Path: 00:01:24.828 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.829 ==> default: -- Command line args: 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:24.829 ==> default: -> value=-drive, 00:01:24.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:24.829 ==> default: -> value=-drive, 00:01:24.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.829 ==> default: -> value=-drive, 00:01:24.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.829 ==> default: -> value=-drive, 00:01:24.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:24.829 ==> default: -> value=-device, 00:01:24.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.829 ==> default: Creating shared folders metadata... 00:01:24.829 ==> default: Starting domain. 00:01:26.214 ==> default: Waiting for domain to get an IP address... 00:01:44.297 ==> default: Waiting for SSH to become available... 00:01:44.297 ==> default: Configuring and enabling network interfaces... 
00:01:46.840 default: SSH address: 192.168.121.53:22 00:01:46.840 default: SSH username: vagrant 00:01:46.840 default: SSH auth method: private key 00:01:48.768 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.881 ==> default: Mounting SSHFS shared folder... 00:01:57.818 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.818 ==> default: Checking Mount.. 00:01:58.754 ==> default: Folder Successfully Mounted! 00:01:58.754 ==> default: Running provisioner: file... 00:01:59.734 default: ~/.gitconfig => .gitconfig 00:02:00.301 00:02:00.301 SUCCESS! 00:02:00.301 00:02:00.301 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.301 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.301 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.301 00:02:00.310 [Pipeline] } 00:02:00.325 [Pipeline] // stage 00:02:00.334 [Pipeline] dir 00:02:00.335 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:00.336 [Pipeline] { 00:02:00.351 [Pipeline] catchError 00:02:00.353 [Pipeline] { 00:02:00.365 [Pipeline] sh 00:02:00.644 + vagrant ssh-config --host vagrant 00:02:00.644 + sed -ne /^Host/,$p 00:02:00.644 + tee ssh_conf 00:02:03.931 Host vagrant 00:02:03.931 HostName 192.168.121.53 00:02:03.931 User vagrant 00:02:03.931 Port 22 00:02:03.931 UserKnownHostsFile /dev/null 00:02:03.931 StrictHostKeyChecking no 00:02:03.931 PasswordAuthentication no 00:02:03.931 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:03.931 IdentitiesOnly yes 00:02:03.931 LogLevel FATAL 00:02:03.931 ForwardAgent yes 00:02:03.931 ForwardX11 yes 00:02:03.931 00:02:03.943 [Pipeline] withEnv 00:02:03.945 [Pipeline] { 00:02:03.958 [Pipeline] sh 00:02:04.236 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:04.236 source /etc/os-release 00:02:04.236 [[ -e /image.version ]] && img=$(< /image.version) 00:02:04.236 # Minimal, systemd-like check. 00:02:04.236 if [[ -e /.dockerenv ]]; then 00:02:04.236 # Clear garbage from the node's name: 00:02:04.236 # agt-er_autotest_547-896 -> autotest_547-896 00:02:04.236 # $HOSTNAME is the actual container id 00:02:04.236 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:04.236 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:04.236 # We can assume this is a mount from a host where container is running, 00:02:04.236 # so fetch its hostname to easily identify the target swarm worker. 
00:02:04.236 container="$(< /etc/hostname) ($agent)" 00:02:04.236 else 00:02:04.236 # Fallback 00:02:04.236 container=$agent 00:02:04.236 fi 00:02:04.236 fi 00:02:04.236 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:04.236 00:02:04.507 [Pipeline] } 00:02:04.528 [Pipeline] // withEnv 00:02:04.538 [Pipeline] setCustomBuildProperty 00:02:04.559 [Pipeline] stage 00:02:04.562 [Pipeline] { (Tests) 00:02:04.584 [Pipeline] sh 00:02:04.862 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.133 [Pipeline] sh 00:02:05.413 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:05.427 [Pipeline] timeout 00:02:05.428 Timeout set to expire in 1 hr 0 min 00:02:05.430 [Pipeline] { 00:02:05.445 [Pipeline] sh 00:02:05.739 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:06.316 HEAD is now at 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:02:06.328 [Pipeline] sh 00:02:06.607 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:06.879 [Pipeline] sh 00:02:07.160 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:07.432 [Pipeline] sh 00:02:07.711 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:07.969 ++ readlink -f spdk_repo 00:02:07.969 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:07.969 + [[ -n /home/vagrant/spdk_repo ]] 00:02:07.969 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:07.969 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:07.969 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:07.969 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:07.969 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:07.969 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:07.969 + cd /home/vagrant/spdk_repo 00:02:07.969 + source /etc/os-release 00:02:07.969 ++ NAME='Fedora Linux' 00:02:07.969 ++ VERSION='39 (Cloud Edition)' 00:02:07.969 ++ ID=fedora 00:02:07.969 ++ VERSION_ID=39 00:02:07.970 ++ VERSION_CODENAME= 00:02:07.970 ++ PLATFORM_ID=platform:f39 00:02:07.970 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:07.970 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:07.970 ++ LOGO=fedora-logo-icon 00:02:07.970 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:07.970 ++ HOME_URL=https://fedoraproject.org/ 00:02:07.970 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:07.970 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:07.970 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:07.970 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:07.970 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:07.970 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:07.970 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:07.970 ++ SUPPORT_END=2024-11-12 00:02:07.970 ++ VARIANT='Cloud Edition' 00:02:07.970 ++ VARIANT_ID=cloud 00:02:07.970 + uname -a 00:02:07.970 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:07.970 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:07.970 Hugepages 00:02:07.970 node hugesize free / total 00:02:07.970 node0 1048576kB 0 / 0 00:02:07.970 node0 2048kB 0 / 0 00:02:07.970 00:02:07.970 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.970 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:07.970 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:07.970 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:07.970 + rm -f /tmp/spdk-ld-path 00:02:07.970 + source autorun-spdk.conf 00:02:07.970 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.970 ++ SPDK_TEST_NVMF=1 00:02:07.970 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.970 ++ SPDK_TEST_URING=1 00:02:07.970 ++ SPDK_TEST_VFIOUSER=1 00:02:07.970 ++ SPDK_TEST_USDT=1 00:02:07.970 ++ SPDK_RUN_UBSAN=1 00:02:07.970 ++ NET_TYPE=virt 00:02:07.970 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.970 ++ RUN_NIGHTLY=1 00:02:07.970 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.970 + [[ -n '' ]] 00:02:07.970 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:08.229 + for M in /var/spdk/build-*-manifest.txt 00:02:08.229 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:08.229 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.229 + for M in /var/spdk/build-*-manifest.txt 00:02:08.229 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:08.229 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.229 + for M in /var/spdk/build-*-manifest.txt 00:02:08.229 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:08.229 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:08.229 ++ uname 00:02:08.229 + [[ Linux == \L\i\n\u\x ]] 00:02:08.229 + sudo dmesg -T 00:02:08.229 + sudo dmesg --clear 00:02:08.229 + dmesg_pid=5233 00:02:08.229 + sudo dmesg -Tw 00:02:08.229 + [[ Fedora Linux == FreeBSD ]] 00:02:08.229 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.229 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:08.229 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:02:08.229 + [[ -x /usr/src/fio-static/fio ]] 00:02:08.229 + export FIO_BIN=/usr/src/fio-static/fio 00:02:08.229 + FIO_BIN=/usr/src/fio-static/fio 00:02:08.229 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:08.229 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:08.229 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:08.229 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.229 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:08.229 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:08.229 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.229 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:08.229 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:08.229 Test configuration: 00:02:08.229 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:08.229 SPDK_TEST_NVMF=1 00:02:08.229 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:08.229 SPDK_TEST_URING=1 00:02:08.229 SPDK_TEST_VFIOUSER=1 00:02:08.229 SPDK_TEST_USDT=1 00:02:08.229 SPDK_RUN_UBSAN=1 00:02:08.229 NET_TYPE=virt 00:02:08.229 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:08.229 RUN_NIGHTLY=1 00:14:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:08.229 00:14:23 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:08.229 00:14:23 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:08.229 00:14:23 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:08.229 00:14:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.229 00:14:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.229 00:14:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.229 00:14:23 -- paths/export.sh@5 -- $ export PATH 00:02:08.229 00:14:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:08.229 00:14:23 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:08.229 00:14:23 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:08.229 00:14:23 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727568863.XXXXXX 00:02:08.229 00:14:24 
-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727568863.3YrlaO 00:02:08.229 00:14:24 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:08.229 00:14:24 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:08.229 00:14:24 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:08.229 00:14:24 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:08.229 00:14:24 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:08.229 00:14:24 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:08.229 00:14:24 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:08.229 00:14:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.229 00:14:24 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:08.229 00:14:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.229 00:14:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.229 00:14:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:08.229 00:14:24 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.229 Sun Sep 29 12:14:24 AM UTC 2024 00:02:08.229 00:14:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.229 LTS-66-g726a04d70 00:02:08.229 00:14:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.230 00:14:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.230 00:14:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.230 00:14:24 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:08.230 00:14:24 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:08.230 00:14:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.230 ************************************ 00:02:08.230 START TEST ubsan 00:02:08.230 ************************************ 00:02:08.230 using ubsan 00:02:08.230 00:14:24 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:08.230 00:02:08.230 real 0m0.000s 00:02:08.230 user 0m0.000s 00:02:08.230 sys 0m0.000s 00:02:08.230 ************************************ 00:02:08.230 END TEST ubsan 00:02:08.230 ************************************ 00:02:08.230 00:14:24 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:08.230 00:14:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.489 00:14:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:08.489 00:14:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:08.489 00:14:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:08.489 00:14:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user 
--with-uring --with-shared 00:02:08.489 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.489 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.058 Using 'verbs' RDMA provider 00:02:21.833 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:36.712 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:36.712 Creating mk/config.mk...done. 00:02:36.712 Creating mk/cc.flags.mk...done. 00:02:36.712 Type 'make' to build. 00:02:36.712 00:14:50 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:36.712 00:14:50 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:36.712 00:14:50 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:36.712 00:14:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.712 ************************************ 00:02:36.712 START TEST make 00:02:36.712 ************************************ 00:02:36.712 00:14:50 -- common/autotest_common.sh@1104 -- $ make -j10 00:02:36.712 make[1]: Nothing to be done for 'all'. 00:02:36.712 The Meson build system 00:02:36.712 Version: 1.5.0 00:02:36.712 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:36.712 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:36.712 Build type: native build 00:02:36.712 Project name: libvfio-user 00:02:36.712 Project version: 0.0.1 00:02:36.712 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:36.712 C linker for the host machine: cc ld.bfd 2.40-14 00:02:36.712 Host machine cpu family: x86_64 00:02:36.712 Host machine cpu: x86_64 00:02:36.712 Run-time dependency threads found: YES 00:02:36.712 Library dl found: YES 00:02:36.712 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:36.712 Run-time dependency json-c found: YES 0.17 00:02:36.712 Run-time dependency cmocka found: YES 1.1.7 00:02:36.712 Program pytest-3 found: NO 00:02:36.712 Program flake8 found: NO 00:02:36.712 Program misspell-fixer found: NO 00:02:36.712 Program restructuredtext-lint found: NO 00:02:36.712 Program valgrind found: YES (/usr/bin/valgrind) 00:02:36.712 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.712 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.713 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.713 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:36.713 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:36.713 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:36.713 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:36.713 Build targets in project: 8 00:02:36.713 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.713 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.713 00:02:36.713 libvfio-user 0.0.1 00:02:36.713 00:02:36.713 User defined options 00:02:36.713 buildtype : debug 00:02:36.713 default_library: shared 00:02:36.713 libdir : /usr/local/lib 00:02:36.713 00:02:36.713 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.972 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:37.231 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:37.231 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:37.231 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:37.231 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:37.231 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:37.231 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:37.231 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:37.231 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:37.231 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:37.489 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:37.489 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:37.489 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:37.489 [13/37] Compiling C object samples/null.p/null.c.o 00:02:37.489 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:37.489 [15/37] Compiling C object samples/server.p/server.c.o 00:02:37.489 [16/37] Compiling C object samples/client.p/client.c.o 00:02:37.489 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:37.489 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:37.489 [19/37] Linking target samples/client 00:02:37.489 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:37.489 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:37.489 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:37.489 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:37.489 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:37.489 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:37.489 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:02:37.747 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:37.747 [28/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:37.747 [29/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:37.747 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:37.747 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.747 [32/37] Linking target samples/gpio-pci-idio-16 00:02:37.747 [33/37] Linking target samples/server 00:02:37.747 [34/37] Linking target samples/null 00:02:37.747 [35/37] Linking target samples/lspci 00:02:38.005 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:38.005 [37/37] Linking target test/unit_tests 00:02:38.005 INFO: autodetecting backend as ninja 00:02:38.005 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.005 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:38.570 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:38.570 ninja: no work to do. 00:02:46.702 The Meson build system 00:02:46.702 Version: 1.5.0 00:02:46.702 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:46.702 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:46.702 Build type: native build 00:02:46.702 Program cat found: YES (/usr/bin/cat) 00:02:46.702 Project name: DPDK 00:02:46.702 Project version: 23.11.0 00:02:46.702 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:46.702 C linker for the host machine: cc ld.bfd 2.40-14 00:02:46.702 Host machine cpu family: x86_64 00:02:46.702 Host machine cpu: x86_64 00:02:46.702 Message: ## Building in Developer Mode ## 00:02:46.702 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:46.702 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:46.702 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:46.702 Program python3 found: YES (/usr/bin/python3) 00:02:46.702 Program cat found: YES (/usr/bin/cat) 00:02:46.702 Compiler for C supports arguments -march=native: YES 00:02:46.702 Checking for size of "void *" : 8 00:02:46.702 Checking for size of "void *" : 8 (cached) 00:02:46.702 Library m found: YES 00:02:46.702 Library numa found: YES 00:02:46.702 Has header "numaif.h" : YES 00:02:46.702 Library fdt found: NO 00:02:46.702 Library execinfo found: NO 00:02:46.702 Has header "execinfo.h" : YES 00:02:46.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:46.702 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:46.702 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:46.702 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:46.702 Run-time dependency openssl found: YES 3.1.1 00:02:46.702 Run-time dependency libpcap found: YES 1.10.4 00:02:46.702 Has header "pcap.h" with dependency libpcap: YES 00:02:46.702 Compiler for C supports arguments -Wcast-qual: YES 00:02:46.702 Compiler for C supports arguments -Wdeprecated: YES 00:02:46.702 Compiler for C supports arguments -Wformat: YES 00:02:46.702 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:46.702 Compiler for C supports arguments -Wformat-security: NO 00:02:46.702 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.702 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:46.702 Compiler for C supports arguments -Wnested-externs: YES 00:02:46.702 Compiler for C supports arguments -Wold-style-definition: YES 00:02:46.702 Compiler for C supports arguments -Wpointer-arith: YES 00:02:46.702 Compiler for C supports arguments -Wsign-compare: YES 00:02:46.702 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:46.702 Compiler for C supports arguments -Wundef: YES 00:02:46.702 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.702 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:46.702 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:46.702 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.702 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:46.702 Program objdump found: YES (/usr/bin/objdump) 00:02:46.702 
Compiler for C supports arguments -mavx512f: YES 00:02:46.702 Checking if "AVX512 checking" compiles: YES 00:02:46.702 Fetching value of define "__SSE4_2__" : 1 00:02:46.702 Fetching value of define "__AES__" : 1 00:02:46.702 Fetching value of define "__AVX__" : 1 00:02:46.702 Fetching value of define "__AVX2__" : 1 00:02:46.702 Fetching value of define "__AVX512BW__" : (undefined) 00:02:46.702 Fetching value of define "__AVX512CD__" : (undefined) 00:02:46.702 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:46.702 Fetching value of define "__AVX512F__" : (undefined) 00:02:46.702 Fetching value of define "__AVX512VL__" : (undefined) 00:02:46.702 Fetching value of define "__PCLMUL__" : 1 00:02:46.702 Fetching value of define "__RDRND__" : 1 00:02:46.702 Fetching value of define "__RDSEED__" : 1 00:02:46.702 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:46.702 Fetching value of define "__znver1__" : (undefined) 00:02:46.702 Fetching value of define "__znver2__" : (undefined) 00:02:46.702 Fetching value of define "__znver3__" : (undefined) 00:02:46.702 Fetching value of define "__znver4__" : (undefined) 00:02:46.702 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:46.702 Message: lib/log: Defining dependency "log" 00:02:46.702 Message: lib/kvargs: Defining dependency "kvargs" 00:02:46.702 Message: lib/telemetry: Defining dependency "telemetry" 00:02:46.702 Checking for function "getentropy" : NO 00:02:46.702 Message: lib/eal: Defining dependency "eal" 00:02:46.702 Message: lib/ring: Defining dependency "ring" 00:02:46.702 Message: lib/rcu: Defining dependency "rcu" 00:02:46.702 Message: lib/mempool: Defining dependency "mempool" 00:02:46.702 Message: lib/mbuf: Defining dependency "mbuf" 00:02:46.702 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:46.702 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:46.702 Compiler for C supports arguments -mpclmul: YES 00:02:46.702 Compiler for C supports arguments -maes: YES 00:02:46.702 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.702 Compiler for C supports arguments -mavx512bw: YES 00:02:46.702 Compiler for C supports arguments -mavx512dq: YES 00:02:46.702 Compiler for C supports arguments -mavx512vl: YES 00:02:46.702 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:46.702 Compiler for C supports arguments -mavx2: YES 00:02:46.702 Compiler for C supports arguments -mavx: YES 00:02:46.702 Message: lib/net: Defining dependency "net" 00:02:46.702 Message: lib/meter: Defining dependency "meter" 00:02:46.702 Message: lib/ethdev: Defining dependency "ethdev" 00:02:46.702 Message: lib/pci: Defining dependency "pci" 00:02:46.702 Message: lib/cmdline: Defining dependency "cmdline" 00:02:46.702 Message: lib/hash: Defining dependency "hash" 00:02:46.702 Message: lib/timer: Defining dependency "timer" 00:02:46.702 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.702 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.702 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.702 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.702 Message: lib/power: Defining dependency "power" 00:02:46.702 Message: lib/reorder: Defining dependency "reorder" 00:02:46.702 Message: lib/security: Defining dependency "security" 00:02:46.702 Has header "linux/userfaultfd.h" : YES 00:02:46.702 Has header "linux/vduse.h" : YES 00:02:46.702 Message: lib/vhost: Defining dependency "vhost" 00:02:46.702 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:46.702 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.702 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.702 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.702 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:46.702 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:46.702 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:46.702 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:46.702 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:46.702 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:46.702 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.702 Configuring doxy-api-html.conf using configuration 00:02:46.702 Configuring doxy-api-man.conf using configuration 00:02:46.702 Program mandb found: YES (/usr/bin/mandb) 00:02:46.702 Program sphinx-build found: NO 00:02:46.702 Configuring rte_build_config.h using configuration 00:02:46.702 Message: 00:02:46.702 ================= 00:02:46.702 Applications Enabled 00:02:46.702 ================= 00:02:46.702 00:02:46.702 apps: 00:02:46.702 00:02:46.702 00:02:46.702 Message: 00:02:46.702 ================= 00:02:46.702 Libraries Enabled 00:02:46.702 ================= 00:02:46.702 00:02:46.702 libs: 00:02:46.702 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:46.702 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:46.702 cryptodev, dmadev, power, reorder, security, vhost, 00:02:46.702 00:02:46.702 Message: 00:02:46.702 =============== 00:02:46.702 Drivers Enabled 00:02:46.702 =============== 00:02:46.702 00:02:46.702 common: 00:02:46.702 00:02:46.702 bus: 00:02:46.702 pci, vdev, 00:02:46.702 mempool: 00:02:46.703 ring, 00:02:46.703 dma: 00:02:46.703 00:02:46.703 net: 00:02:46.703 00:02:46.703 crypto: 00:02:46.703 00:02:46.703 compress: 00:02:46.703 00:02:46.703 vdpa: 00:02:46.703 00:02:46.703 00:02:46.703 Message: 00:02:46.703 ================= 00:02:46.703 Content Skipped 00:02:46.703 ================= 00:02:46.703 00:02:46.703 apps: 00:02:46.703 dumpcap: explicitly disabled via build config 00:02:46.703 graph: explicitly disabled via build config 00:02:46.703 pdump: explicitly disabled via build config 00:02:46.703 proc-info: explicitly disabled via build config 00:02:46.703 test-acl: explicitly disabled via build config 00:02:46.703 test-bbdev: explicitly disabled via build config 00:02:46.703 test-cmdline: explicitly disabled via build config 00:02:46.703 test-compress-perf: explicitly disabled via build config 00:02:46.703 test-crypto-perf: explicitly disabled via build config 00:02:46.703 test-dma-perf: explicitly disabled via build config 00:02:46.703 test-eventdev: explicitly disabled via build config 00:02:46.703 test-fib: explicitly disabled via build config 00:02:46.703 test-flow-perf: explicitly disabled via build config 00:02:46.703 test-gpudev: explicitly disabled via build config 00:02:46.703 test-mldev: explicitly disabled via build config 00:02:46.703 test-pipeline: explicitly disabled via build config 00:02:46.703 test-pmd: explicitly disabled via build config 00:02:46.703 test-regex: explicitly disabled via build config 00:02:46.703 test-sad: explicitly disabled via build config 00:02:46.703 test-security-perf: explicitly disabled via build config 00:02:46.703 00:02:46.703 libs: 00:02:46.703 metrics: explicitly 
disabled via build config 00:02:46.703 acl: explicitly disabled via build config 00:02:46.703 bbdev: explicitly disabled via build config 00:02:46.703 bitratestats: explicitly disabled via build config 00:02:46.703 bpf: explicitly disabled via build config 00:02:46.703 cfgfile: explicitly disabled via build config 00:02:46.703 distributor: explicitly disabled via build config 00:02:46.703 efd: explicitly disabled via build config 00:02:46.703 eventdev: explicitly disabled via build config 00:02:46.703 dispatcher: explicitly disabled via build config 00:02:46.703 gpudev: explicitly disabled via build config 00:02:46.703 gro: explicitly disabled via build config 00:02:46.703 gso: explicitly disabled via build config 00:02:46.703 ip_frag: explicitly disabled via build config 00:02:46.703 jobstats: explicitly disabled via build config 00:02:46.703 latencystats: explicitly disabled via build config 00:02:46.703 lpm: explicitly disabled via build config 00:02:46.703 member: explicitly disabled via build config 00:02:46.703 pcapng: explicitly disabled via build config 00:02:46.703 rawdev: explicitly disabled via build config 00:02:46.703 regexdev: explicitly disabled via build config 00:02:46.703 mldev: explicitly disabled via build config 00:02:46.703 rib: explicitly disabled via build config 00:02:46.703 sched: explicitly disabled via build config 00:02:46.703 stack: explicitly disabled via build config 00:02:46.703 ipsec: explicitly disabled via build config 00:02:46.703 pdcp: explicitly disabled via build config 00:02:46.703 fib: explicitly disabled via build config 00:02:46.703 port: explicitly disabled via build config 00:02:46.703 pdump: explicitly disabled via build config 00:02:46.703 table: explicitly disabled via build config 00:02:46.703 pipeline: explicitly disabled via build config 00:02:46.703 graph: explicitly disabled via build config 00:02:46.703 node: explicitly disabled via build config 00:02:46.703 00:02:46.703 drivers: 00:02:46.703 common/cpt: not in enabled drivers build config 00:02:46.703 common/dpaax: not in enabled drivers build config 00:02:46.703 common/iavf: not in enabled drivers build config 00:02:46.703 common/idpf: not in enabled drivers build config 00:02:46.703 common/mvep: not in enabled drivers build config 00:02:46.703 common/octeontx: not in enabled drivers build config 00:02:46.703 bus/auxiliary: not in enabled drivers build config 00:02:46.703 bus/cdx: not in enabled drivers build config 00:02:46.703 bus/dpaa: not in enabled drivers build config 00:02:46.703 bus/fslmc: not in enabled drivers build config 00:02:46.703 bus/ifpga: not in enabled drivers build config 00:02:46.703 bus/platform: not in enabled drivers build config 00:02:46.703 bus/vmbus: not in enabled drivers build config 00:02:46.703 common/cnxk: not in enabled drivers build config 00:02:46.703 common/mlx5: not in enabled drivers build config 00:02:46.703 common/nfp: not in enabled drivers build config 00:02:46.703 common/qat: not in enabled drivers build config 00:02:46.703 common/sfc_efx: not in enabled drivers build config 00:02:46.703 mempool/bucket: not in enabled drivers build config 00:02:46.703 mempool/cnxk: not in enabled drivers build config 00:02:46.703 mempool/dpaa: not in enabled drivers build config 00:02:46.703 mempool/dpaa2: not in enabled drivers build config 00:02:46.703 mempool/octeontx: not in enabled drivers build config 00:02:46.703 mempool/stack: not in enabled drivers build config 00:02:46.703 dma/cnxk: not in enabled drivers build config 00:02:46.703 dma/dpaa: not in 
enabled drivers build config 00:02:46.703 dma/dpaa2: not in enabled drivers build config 00:02:46.703 dma/hisilicon: not in enabled drivers build config 00:02:46.703 dma/idxd: not in enabled drivers build config 00:02:46.703 dma/ioat: not in enabled drivers build config 00:02:46.703 dma/skeleton: not in enabled drivers build config 00:02:46.703 net/af_packet: not in enabled drivers build config 00:02:46.703 net/af_xdp: not in enabled drivers build config 00:02:46.703 net/ark: not in enabled drivers build config 00:02:46.703 net/atlantic: not in enabled drivers build config 00:02:46.703 net/avp: not in enabled drivers build config 00:02:46.703 net/axgbe: not in enabled drivers build config 00:02:46.703 net/bnx2x: not in enabled drivers build config 00:02:46.703 net/bnxt: not in enabled drivers build config 00:02:46.703 net/bonding: not in enabled drivers build config 00:02:46.703 net/cnxk: not in enabled drivers build config 00:02:46.703 net/cpfl: not in enabled drivers build config 00:02:46.703 net/cxgbe: not in enabled drivers build config 00:02:46.703 net/dpaa: not in enabled drivers build config 00:02:46.703 net/dpaa2: not in enabled drivers build config 00:02:46.703 net/e1000: not in enabled drivers build config 00:02:46.703 net/ena: not in enabled drivers build config 00:02:46.703 net/enetc: not in enabled drivers build config 00:02:46.703 net/enetfec: not in enabled drivers build config 00:02:46.703 net/enic: not in enabled drivers build config 00:02:46.703 net/failsafe: not in enabled drivers build config 00:02:46.703 net/fm10k: not in enabled drivers build config 00:02:46.703 net/gve: not in enabled drivers build config 00:02:46.703 net/hinic: not in enabled drivers build config 00:02:46.703 net/hns3: not in enabled drivers build config 00:02:46.703 net/i40e: not in enabled drivers build config 00:02:46.703 net/iavf: not in enabled drivers build config 00:02:46.703 net/ice: not in enabled drivers build config 00:02:46.703 net/idpf: not in enabled drivers build config 00:02:46.703 net/igc: not in enabled drivers build config 00:02:46.703 net/ionic: not in enabled drivers build config 00:02:46.703 net/ipn3ke: not in enabled drivers build config 00:02:46.703 net/ixgbe: not in enabled drivers build config 00:02:46.703 net/mana: not in enabled drivers build config 00:02:46.703 net/memif: not in enabled drivers build config 00:02:46.703 net/mlx4: not in enabled drivers build config 00:02:46.703 net/mlx5: not in enabled drivers build config 00:02:46.703 net/mvneta: not in enabled drivers build config 00:02:46.703 net/mvpp2: not in enabled drivers build config 00:02:46.703 net/netvsc: not in enabled drivers build config 00:02:46.703 net/nfb: not in enabled drivers build config 00:02:46.703 net/nfp: not in enabled drivers build config 00:02:46.703 net/ngbe: not in enabled drivers build config 00:02:46.703 net/null: not in enabled drivers build config 00:02:46.703 net/octeontx: not in enabled drivers build config 00:02:46.703 net/octeon_ep: not in enabled drivers build config 00:02:46.703 net/pcap: not in enabled drivers build config 00:02:46.703 net/pfe: not in enabled drivers build config 00:02:46.703 net/qede: not in enabled drivers build config 00:02:46.703 net/ring: not in enabled drivers build config 00:02:46.703 net/sfc: not in enabled drivers build config 00:02:46.703 net/softnic: not in enabled drivers build config 00:02:46.703 net/tap: not in enabled drivers build config 00:02:46.703 net/thunderx: not in enabled drivers build config 00:02:46.703 net/txgbe: not in enabled drivers 
build config 00:02:46.703 net/vdev_netvsc: not in enabled drivers build config 00:02:46.703 net/vhost: not in enabled drivers build config 00:02:46.703 net/virtio: not in enabled drivers build config 00:02:46.703 net/vmxnet3: not in enabled drivers build config 00:02:46.703 raw/*: missing internal dependency, "rawdev" 00:02:46.703 crypto/armv8: not in enabled drivers build config 00:02:46.703 crypto/bcmfs: not in enabled drivers build config 00:02:46.703 crypto/caam_jr: not in enabled drivers build config 00:02:46.703 crypto/ccp: not in enabled drivers build config 00:02:46.703 crypto/cnxk: not in enabled drivers build config 00:02:46.703 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.703 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.703 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.703 crypto/mlx5: not in enabled drivers build config 00:02:46.703 crypto/mvsam: not in enabled drivers build config 00:02:46.703 crypto/nitrox: not in enabled drivers build config 00:02:46.703 crypto/null: not in enabled drivers build config 00:02:46.703 crypto/octeontx: not in enabled drivers build config 00:02:46.703 crypto/openssl: not in enabled drivers build config 00:02:46.703 crypto/scheduler: not in enabled drivers build config 00:02:46.703 crypto/uadk: not in enabled drivers build config 00:02:46.703 crypto/virtio: not in enabled drivers build config 00:02:46.703 compress/isal: not in enabled drivers build config 00:02:46.703 compress/mlx5: not in enabled drivers build config 00:02:46.703 compress/octeontx: not in enabled drivers build config 00:02:46.703 compress/zlib: not in enabled drivers build config 00:02:46.703 regex/*: missing internal dependency, "regexdev" 00:02:46.703 ml/*: missing internal dependency, "mldev" 00:02:46.703 vdpa/ifc: not in enabled drivers build config 00:02:46.703 vdpa/mlx5: not in enabled drivers build config 00:02:46.703 vdpa/nfp: not in enabled drivers build config 00:02:46.703 vdpa/sfc: not in enabled drivers build config 00:02:46.703 event/*: missing internal dependency, "eventdev" 00:02:46.703 baseband/*: missing internal dependency, "bbdev" 00:02:46.703 gpu/*: missing internal dependency, "gpudev" 00:02:46.703 00:02:46.703 00:02:46.703 Build targets in project: 85 00:02:46.703 00:02:46.703 DPDK 23.11.0 00:02:46.704 00:02:46.704 User defined options 00:02:46.704 buildtype : debug 00:02:46.704 default_library : shared 00:02:46.704 libdir : lib 00:02:46.704 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.704 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:46.704 c_link_args : 00:02:46.704 cpu_instruction_set: native 00:02:46.704 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.704 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.704 enable_docs : false 00:02:46.704 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:46.704 enable_kmods : false 00:02:46.704 tests : false 00:02:46.704 00:02:46.704 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.268 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:47.268 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:47.268 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.268 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.268 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.525 [5/265] Linking static target lib/librte_kvargs.a 00:02:47.525 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.525 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:47.525 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.525 [9/265] Linking static target lib/librte_log.a 00:02:47.526 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.783 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.350 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.350 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.350 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.350 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.350 [16/265] Linking static target lib/librte_telemetry.a 00:02:48.350 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.350 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.350 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.350 [20/265] Linking target lib/librte_log.so.24.0 00:02:48.608 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.608 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:48.608 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:48.866 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.866 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:48.866 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.125 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.125 [28/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:49.125 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.125 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.383 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.383 [32/265] Linking target lib/librte_telemetry.so.24.0 00:02:49.383 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.383 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.640 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:49.640 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.640 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.640 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.640 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.640 [40/265] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.898 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.898 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.898 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.898 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.898 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.156 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.413 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.413 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.413 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.670 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.670 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.670 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.929 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.929 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.929 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.929 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.929 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.189 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.189 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.189 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.189 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.447 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.447 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.448 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.705 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.705 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.705 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:51.705 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.963 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.220 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.220 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.220 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.220 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.220 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.220 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.221 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.221 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.479 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.737 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.737 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.737 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.737 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.737 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.996 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:52.996 [85/265] Linking static target lib/librte_ring.a 00:02:53.255 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:53.255 [87/265] Linking static target lib/librte_eal.a 00:02:53.515 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.515 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:53.515 [90/265] Linking static target lib/librte_rcu.a 00:02:53.515 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.515 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.515 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.831 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.831 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.831 [96/265] Linking static target lib/librte_mempool.a 00:02:53.831 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:54.090 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.090 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:54.348 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:54.348 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.607 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:54.607 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:54.607 [104/265] Linking static target lib/librte_mbuf.a 00:02:54.607 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.607 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:54.865 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:54.865 [108/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.865 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:54.865 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:54.865 [111/265] Linking static target lib/librte_net.a 00:02:55.124 [112/265] Linking static target lib/librte_meter.a 00:02:55.382 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:55.382 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.382 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.382 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.382 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.382 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:55.641 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.208 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.208 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:56.208 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.467 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.467 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.467 [125/265] Linking static target lib/librte_pci.a 00:02:56.467 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.725 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:56.725 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.725 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.725 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.725 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.725 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.726 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.726 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.726 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.726 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.726 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.726 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:56.984 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.984 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.984 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.984 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.984 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.243 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.243 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.243 [146/265] Linking static target lib/librte_cmdline.a 00:02:57.501 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.502 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.502 [149/265] Linking static target lib/librte_ethdev.a 00:02:57.502 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.502 [151/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.502 [152/265] Linking static target lib/librte_timer.a 00:02:57.760 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.760 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:57.760 [155/265] Linking static target lib/librte_hash.a 00:02:57.760 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.019 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.019 [158/265] Linking static target lib/librte_compressdev.a 00:02:58.019 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.277 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.277 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.277 [162/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.536 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.536 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.795 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.795 [166/265] Linking static target lib/librte_dmadev.a 00:02:58.795 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.795 [168/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.795 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.795 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:59.053 [171/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.053 [172/265] Linking static target lib/librte_cryptodev.a 00:02:59.053 [173/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.053 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.312 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:59.312 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.312 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:59.570 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.570 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.570 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:59.570 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:59.829 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.829 [183/265] Linking static target lib/librte_power.a 00:03:00.088 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.088 [185/265] Linking static target lib/librte_reorder.a 00:03:00.088 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:00.347 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:00.347 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.347 [189/265] Linking static target lib/librte_security.a 00:03:00.347 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:00.605 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.605 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.173 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.173 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.173 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.173 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.173 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:01.431 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.431 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.690 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.690 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.690 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.690 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:01.948 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:01.948 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:01.948 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.948 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.948 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:01.948 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.207 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.207 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.207 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.207 [213/265] Linking static target drivers/librte_bus_vdev.a 00:03:02.207 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.207 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.207 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.207 [217/265] Linking static target drivers/librte_bus_pci.a 00:03:02.465 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.465 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.465 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.465 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:02.724 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.724 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:02.724 [224/265] Linking static target drivers/librte_mempool_ring.a 00:03:02.724 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.302 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.302 [227/265] Linking static target lib/librte_vhost.a 00:03:04.241 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.241 [229/265] Linking target lib/librte_eal.so.24.0 00:03:04.500 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:04.500 [231/265] Linking target lib/librte_meter.so.24.0 00:03:04.500 [232/265] Linking target lib/librte_timer.so.24.0 00:03:04.500 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:04.500 [234/265] Linking target lib/librte_pci.so.24.0 00:03:04.500 [235/265] Linking target lib/librte_ring.so.24.0 00:03:04.500 [236/265] Linking target lib/librte_dmadev.so.24.0 00:03:04.500 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:04.500 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:04.500 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:04.500 [240/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:04.500 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:04.500 [242/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.759 [243/265] Linking target lib/librte_rcu.so.24.0 00:03:04.759 [244/265] Linking target lib/librte_mempool.so.24.0 00:03:04.759 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:04.759 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:04.759 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:04.759 [248/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.759 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:04.759 [250/265] Linking target lib/librte_mbuf.so.24.0 00:03:05.017 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:05.017 [252/265] Linking target lib/librte_compressdev.so.24.0 00:03:05.017 [253/265] Linking target lib/librte_reorder.so.24.0 00:03:05.017 [254/265] Linking target lib/librte_net.so.24.0 00:03:05.017 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:03:05.276 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:05.276 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:05.276 [258/265] Linking target lib/librte_cmdline.so.24.0 00:03:05.276 [259/265] Linking target lib/librte_hash.so.24.0 00:03:05.276 [260/265] Linking target lib/librte_security.so.24.0 00:03:05.276 [261/265] Linking target lib/librte_ethdev.so.24.0 00:03:05.535 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:05.535 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:05.535 [264/265] Linking target lib/librte_power.so.24.0 00:03:05.535 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:05.535 INFO: autodetecting backend as ninja 00:03:05.535 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:06.911 CC lib/ut/ut.o 00:03:06.911 CC lib/ut_mock/mock.o 00:03:06.911 CC lib/log/log.o 00:03:06.911 CC lib/log/log_flags.o 00:03:06.911 CC lib/log/log_deprecated.o 00:03:06.911 LIB libspdk_ut_mock.a 00:03:06.911 SO libspdk_ut_mock.so.5.0 00:03:06.911 LIB libspdk_ut.a 00:03:06.911 LIB libspdk_log.a 00:03:06.911 SO libspdk_ut.so.1.0 00:03:06.911 SYMLINK libspdk_ut_mock.so 00:03:06.911 SO libspdk_log.so.6.1 00:03:06.911 SYMLINK libspdk_ut.so 00:03:06.911 SYMLINK libspdk_log.so 00:03:07.170 CC lib/util/base64.o 00:03:07.170 CC lib/util/bit_array.o 00:03:07.170 CC lib/util/cpuset.o 00:03:07.170 CC lib/util/crc32.o 00:03:07.170 CC lib/util/crc16.o 00:03:07.170 CC lib/util/crc32c.o 00:03:07.170 CXX lib/trace_parser/trace.o 00:03:07.170 CC lib/ioat/ioat.o 00:03:07.170 CC lib/dma/dma.o 00:03:07.170 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.170 CC lib/util/crc32_ieee.o 00:03:07.170 CC lib/util/crc64.o 00:03:07.170 CC lib/util/dif.o 00:03:07.170 CC lib/util/fd.o 00:03:07.447 LIB libspdk_dma.a 00:03:07.447 CC lib/util/file.o 00:03:07.447 SO libspdk_dma.so.3.0 00:03:07.447 CC lib/util/hexlify.o 00:03:07.447 CC lib/util/iov.o 00:03:07.447 SYMLINK libspdk_dma.so 00:03:07.447 CC lib/vfio_user/host/vfio_user.o 00:03:07.447 CC lib/util/math.o 00:03:07.447 LIB libspdk_ioat.a 00:03:07.447 
CC lib/util/pipe.o 00:03:07.447 SO libspdk_ioat.so.6.0 00:03:07.447 CC lib/util/strerror_tls.o 00:03:07.447 CC lib/util/string.o 00:03:07.447 CC lib/util/uuid.o 00:03:07.447 SYMLINK libspdk_ioat.so 00:03:07.447 CC lib/util/fd_group.o 00:03:07.447 CC lib/util/xor.o 00:03:07.717 CC lib/util/zipf.o 00:03:07.717 LIB libspdk_vfio_user.a 00:03:07.717 SO libspdk_vfio_user.so.4.0 00:03:07.717 SYMLINK libspdk_vfio_user.so 00:03:07.976 LIB libspdk_util.a 00:03:07.976 SO libspdk_util.so.8.0 00:03:07.976 SYMLINK libspdk_util.so 00:03:08.234 LIB libspdk_trace_parser.a 00:03:08.234 SO libspdk_trace_parser.so.4.0 00:03:08.234 CC lib/idxd/idxd.o 00:03:08.234 CC lib/idxd/idxd_user.o 00:03:08.234 CC lib/idxd/idxd_kernel.o 00:03:08.234 CC lib/vmd/led.o 00:03:08.234 CC lib/vmd/vmd.o 00:03:08.234 CC lib/env_dpdk/env.o 00:03:08.234 CC lib/rdma/common.o 00:03:08.234 CC lib/conf/conf.o 00:03:08.234 CC lib/json/json_parse.o 00:03:08.234 SYMLINK libspdk_trace_parser.so 00:03:08.234 CC lib/json/json_util.o 00:03:08.234 CC lib/env_dpdk/memory.o 00:03:08.234 CC lib/json/json_write.o 00:03:08.492 CC lib/rdma/rdma_verbs.o 00:03:08.492 LIB libspdk_conf.a 00:03:08.492 SO libspdk_conf.so.5.0 00:03:08.492 CC lib/env_dpdk/pci.o 00:03:08.492 CC lib/env_dpdk/init.o 00:03:08.492 SYMLINK libspdk_conf.so 00:03:08.492 CC lib/env_dpdk/threads.o 00:03:08.492 CC lib/env_dpdk/pci_ioat.o 00:03:08.750 LIB libspdk_rdma.a 00:03:08.750 CC lib/env_dpdk/pci_virtio.o 00:03:08.750 LIB libspdk_json.a 00:03:08.750 CC lib/env_dpdk/pci_vmd.o 00:03:08.750 SO libspdk_rdma.so.5.0 00:03:08.750 SO libspdk_json.so.5.1 00:03:08.750 SYMLINK libspdk_rdma.so 00:03:08.750 LIB libspdk_idxd.a 00:03:08.750 CC lib/env_dpdk/pci_idxd.o 00:03:08.750 SYMLINK libspdk_json.so 00:03:08.750 CC lib/env_dpdk/pci_event.o 00:03:08.750 SO libspdk_idxd.so.11.0 00:03:08.750 CC lib/env_dpdk/sigbus_handler.o 00:03:08.750 LIB libspdk_vmd.a 00:03:08.750 SYMLINK libspdk_idxd.so 00:03:08.750 CC lib/env_dpdk/pci_dpdk.o 00:03:08.750 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.750 SO libspdk_vmd.so.5.0 00:03:09.009 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.009 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.009 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.009 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.009 SYMLINK libspdk_vmd.so 00:03:09.009 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:09.267 LIB libspdk_jsonrpc.a 00:03:09.267 SO libspdk_jsonrpc.so.5.1 00:03:09.267 SYMLINK libspdk_jsonrpc.so 00:03:09.527 CC lib/rpc/rpc.o 00:03:09.527 LIB libspdk_env_dpdk.a 00:03:09.787 LIB libspdk_rpc.a 00:03:09.787 SO libspdk_env_dpdk.so.13.0 00:03:09.787 SO libspdk_rpc.so.5.0 00:03:09.787 SYMLINK libspdk_rpc.so 00:03:09.787 SYMLINK libspdk_env_dpdk.so 00:03:10.046 CC lib/trace/trace.o 00:03:10.046 CC lib/trace/trace_flags.o 00:03:10.046 CC lib/trace/trace_rpc.o 00:03:10.046 CC lib/notify/notify.o 00:03:10.047 CC lib/notify/notify_rpc.o 00:03:10.047 CC lib/sock/sock.o 00:03:10.047 CC lib/sock/sock_rpc.o 00:03:10.047 LIB libspdk_notify.a 00:03:10.305 SO libspdk_notify.so.5.0 00:03:10.305 LIB libspdk_trace.a 00:03:10.305 SYMLINK libspdk_notify.so 00:03:10.305 SO libspdk_trace.so.9.0 00:03:10.305 SYMLINK libspdk_trace.so 00:03:10.305 LIB libspdk_sock.a 00:03:10.305 SO libspdk_sock.so.8.0 00:03:10.564 SYMLINK libspdk_sock.so 00:03:10.564 CC lib/thread/thread.o 00:03:10.564 CC lib/thread/iobuf.o 00:03:10.564 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.564 CC lib/nvme/nvme_fabric.o 00:03:10.564 CC lib/nvme/nvme_ctrlr.o 00:03:10.564 CC lib/nvme/nvme_ns_cmd.o 00:03:10.564 CC lib/nvme/nvme_ns.o 00:03:10.564 CC 
lib/nvme/nvme_pcie_common.o 00:03:10.564 CC lib/nvme/nvme_pcie.o 00:03:10.564 CC lib/nvme/nvme_qpair.o 00:03:10.821 CC lib/nvme/nvme.o 00:03:11.387 CC lib/nvme/nvme_quirks.o 00:03:11.387 CC lib/nvme/nvme_transport.o 00:03:11.646 CC lib/nvme/nvme_discovery.o 00:03:11.646 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.646 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.646 CC lib/nvme/nvme_tcp.o 00:03:11.646 CC lib/nvme/nvme_opal.o 00:03:11.905 CC lib/nvme/nvme_io_msg.o 00:03:12.163 CC lib/nvme/nvme_poll_group.o 00:03:12.163 LIB libspdk_thread.a 00:03:12.163 SO libspdk_thread.so.9.0 00:03:12.163 CC lib/nvme/nvme_zns.o 00:03:12.163 CC lib/nvme/nvme_cuse.o 00:03:12.163 CC lib/nvme/nvme_vfio_user.o 00:03:12.163 SYMLINK libspdk_thread.so 00:03:12.163 CC lib/nvme/nvme_rdma.o 00:03:12.421 CC lib/accel/accel.o 00:03:12.421 CC lib/blob/blobstore.o 00:03:12.421 CC lib/blob/request.o 00:03:12.679 CC lib/init/json_config.o 00:03:12.937 CC lib/accel/accel_rpc.o 00:03:12.937 CC lib/accel/accel_sw.o 00:03:12.937 CC lib/init/subsystem.o 00:03:12.937 CC lib/virtio/virtio.o 00:03:12.937 CC lib/virtio/virtio_vhost_user.o 00:03:12.937 CC lib/virtio/virtio_vfio_user.o 00:03:13.195 CC lib/init/subsystem_rpc.o 00:03:13.195 CC lib/virtio/virtio_pci.o 00:03:13.195 CC lib/vfu_tgt/tgt_endpoint.o 00:03:13.195 CC lib/init/rpc.o 00:03:13.195 CC lib/blob/zeroes.o 00:03:13.195 CC lib/vfu_tgt/tgt_rpc.o 00:03:13.452 CC lib/blob/blob_bs_dev.o 00:03:13.452 LIB libspdk_accel.a 00:03:13.452 LIB libspdk_init.a 00:03:13.452 SO libspdk_accel.so.14.0 00:03:13.452 SO libspdk_init.so.4.0 00:03:13.452 SYMLINK libspdk_accel.so 00:03:13.452 SYMLINK libspdk_init.so 00:03:13.452 LIB libspdk_virtio.a 00:03:13.452 LIB libspdk_vfu_tgt.a 00:03:13.452 SO libspdk_virtio.so.6.0 00:03:13.452 SO libspdk_vfu_tgt.so.2.0 00:03:13.710 CC lib/bdev/bdev.o 00:03:13.710 CC lib/bdev/bdev_rpc.o 00:03:13.710 CC lib/bdev/bdev_zone.o 00:03:13.710 CC lib/bdev/scsi_nvme.o 00:03:13.710 CC lib/bdev/part.o 00:03:13.710 CC lib/event/app.o 00:03:13.710 LIB libspdk_nvme.a 00:03:13.710 SYMLINK libspdk_virtio.so 00:03:13.710 SYMLINK libspdk_vfu_tgt.so 00:03:13.710 CC lib/event/reactor.o 00:03:13.710 CC lib/event/log_rpc.o 00:03:13.710 CC lib/event/app_rpc.o 00:03:13.710 CC lib/event/scheduler_static.o 00:03:13.710 SO libspdk_nvme.so.12.0 00:03:13.969 SYMLINK libspdk_nvme.so 00:03:13.969 LIB libspdk_event.a 00:03:14.227 SO libspdk_event.so.12.0 00:03:14.227 SYMLINK libspdk_event.so 00:03:15.162 LIB libspdk_blob.a 00:03:15.162 SO libspdk_blob.so.10.1 00:03:15.162 SYMLINK libspdk_blob.so 00:03:15.420 CC lib/blobfs/blobfs.o 00:03:15.420 CC lib/blobfs/tree.o 00:03:15.420 CC lib/lvol/lvol.o 00:03:15.987 LIB libspdk_bdev.a 00:03:15.987 SO libspdk_bdev.so.14.0 00:03:16.245 SYMLINK libspdk_bdev.so 00:03:16.245 LIB libspdk_blobfs.a 00:03:16.245 CC lib/ublk/ublk.o 00:03:16.245 CC lib/nbd/nbd.o 00:03:16.245 CC lib/nbd/nbd_rpc.o 00:03:16.245 CC lib/ublk/ublk_rpc.o 00:03:16.245 CC lib/scsi/dev.o 00:03:16.245 CC lib/scsi/lun.o 00:03:16.245 SO libspdk_blobfs.so.9.0 00:03:16.245 CC lib/ftl/ftl_core.o 00:03:16.245 CC lib/nvmf/ctrlr.o 00:03:16.245 LIB libspdk_lvol.a 00:03:16.504 SYMLINK libspdk_blobfs.so 00:03:16.504 CC lib/scsi/port.o 00:03:16.504 SO libspdk_lvol.so.9.1 00:03:16.504 SYMLINK libspdk_lvol.so 00:03:16.504 CC lib/scsi/scsi.o 00:03:16.504 CC lib/scsi/scsi_bdev.o 00:03:16.504 CC lib/scsi/scsi_pr.o 00:03:16.504 CC lib/scsi/scsi_rpc.o 00:03:16.504 CC lib/scsi/task.o 00:03:16.504 CC lib/nvmf/ctrlr_discovery.o 00:03:16.762 CC lib/ftl/ftl_init.o 00:03:16.762 CC lib/ftl/ftl_layout.o 
00:03:16.762 CC lib/ftl/ftl_debug.o 00:03:16.762 LIB libspdk_nbd.a 00:03:16.762 CC lib/nvmf/ctrlr_bdev.o 00:03:16.762 SO libspdk_nbd.so.6.0 00:03:16.762 SYMLINK libspdk_nbd.so 00:03:16.762 CC lib/nvmf/subsystem.o 00:03:16.762 CC lib/ftl/ftl_io.o 00:03:16.762 CC lib/ftl/ftl_sb.o 00:03:17.021 LIB libspdk_ublk.a 00:03:17.021 LIB libspdk_scsi.a 00:03:17.021 CC lib/ftl/ftl_l2p.o 00:03:17.021 SO libspdk_ublk.so.2.0 00:03:17.021 SO libspdk_scsi.so.8.0 00:03:17.021 CC lib/ftl/ftl_l2p_flat.o 00:03:17.021 SYMLINK libspdk_ublk.so 00:03:17.021 CC lib/nvmf/nvmf.o 00:03:17.021 CC lib/ftl/ftl_nv_cache.o 00:03:17.021 CC lib/ftl/ftl_band.o 00:03:17.021 CC lib/nvmf/nvmf_rpc.o 00:03:17.021 SYMLINK libspdk_scsi.so 00:03:17.280 CC lib/iscsi/conn.o 00:03:17.280 CC lib/nvmf/transport.o 00:03:17.280 CC lib/vhost/vhost.o 00:03:17.538 CC lib/vhost/vhost_rpc.o 00:03:17.538 CC lib/vhost/vhost_scsi.o 00:03:17.796 CC lib/iscsi/init_grp.o 00:03:17.796 CC lib/ftl/ftl_band_ops.o 00:03:17.796 CC lib/iscsi/iscsi.o 00:03:17.796 CC lib/iscsi/md5.o 00:03:18.054 CC lib/iscsi/param.o 00:03:18.054 CC lib/iscsi/portal_grp.o 00:03:18.054 CC lib/nvmf/tcp.o 00:03:18.054 CC lib/ftl/ftl_writer.o 00:03:18.054 CC lib/ftl/ftl_rq.o 00:03:18.054 CC lib/ftl/ftl_reloc.o 00:03:18.054 CC lib/iscsi/tgt_node.o 00:03:18.312 CC lib/iscsi/iscsi_subsystem.o 00:03:18.312 CC lib/vhost/vhost_blk.o 00:03:18.312 CC lib/vhost/rte_vhost_user.o 00:03:18.312 CC lib/iscsi/iscsi_rpc.o 00:03:18.312 CC lib/nvmf/vfio_user.o 00:03:18.312 CC lib/nvmf/rdma.o 00:03:18.571 CC lib/ftl/ftl_l2p_cache.o 00:03:18.571 CC lib/iscsi/task.o 00:03:18.571 CC lib/ftl/ftl_p2l.o 00:03:18.571 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.829 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.829 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.829 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:19.088 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:19.088 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:19.088 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:19.088 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:19.346 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.346 LIB libspdk_iscsi.a 00:03:19.346 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.346 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.346 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.346 LIB libspdk_vhost.a 00:03:19.346 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.346 SO libspdk_iscsi.so.7.0 00:03:19.346 SO libspdk_vhost.so.7.1 00:03:19.605 CC lib/ftl/utils/ftl_conf.o 00:03:19.605 SYMLINK libspdk_iscsi.so 00:03:19.605 CC lib/ftl/utils/ftl_md.o 00:03:19.605 CC lib/ftl/utils/ftl_mempool.o 00:03:19.605 CC lib/ftl/utils/ftl_bitmap.o 00:03:19.605 SYMLINK libspdk_vhost.so 00:03:19.605 CC lib/ftl/utils/ftl_property.o 00:03:19.605 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.605 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.605 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.605 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.863 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.863 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.863 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.863 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.863 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.863 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.863 CC lib/ftl/base/ftl_base_dev.o 00:03:19.863 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.863 CC lib/ftl/ftl_trace.o 00:03:20.122 LIB libspdk_ftl.a 00:03:20.380 SO libspdk_ftl.so.8.0 00:03:20.380 LIB libspdk_nvmf.a 00:03:20.639 SO libspdk_nvmf.so.17.0 00:03:20.639 SYMLINK libspdk_ftl.so 00:03:20.639 SYMLINK libspdk_nvmf.so 00:03:20.898 CC module/vfu_device/vfu_virtio.o 00:03:20.898 CC module/env_dpdk/env_dpdk_rpc.o 
00:03:21.156 CC module/sock/uring/uring.o 00:03:21.156 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:21.156 CC module/accel/dsa/accel_dsa.o 00:03:21.156 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:21.156 CC module/sock/posix/posix.o 00:03:21.156 CC module/accel/error/accel_error.o 00:03:21.156 CC module/accel/ioat/accel_ioat.o 00:03:21.156 CC module/blob/bdev/blob_bdev.o 00:03:21.156 LIB libspdk_env_dpdk_rpc.a 00:03:21.156 SO libspdk_env_dpdk_rpc.so.5.0 00:03:21.156 LIB libspdk_scheduler_dpdk_governor.a 00:03:21.156 CC module/accel/error/accel_error_rpc.o 00:03:21.156 SYMLINK libspdk_env_dpdk_rpc.so 00:03:21.156 CC module/accel/ioat/accel_ioat_rpc.o 00:03:21.156 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:21.156 LIB libspdk_scheduler_dynamic.a 00:03:21.156 SO libspdk_scheduler_dynamic.so.3.0 00:03:21.156 CC module/accel/dsa/accel_dsa_rpc.o 00:03:21.156 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:21.415 CC module/vfu_device/vfu_virtio_blk.o 00:03:21.415 SYMLINK libspdk_scheduler_dynamic.so 00:03:21.415 LIB libspdk_blob_bdev.a 00:03:21.415 LIB libspdk_accel_ioat.a 00:03:21.415 LIB libspdk_accel_error.a 00:03:21.415 CC module/accel/iaa/accel_iaa.o 00:03:21.415 SO libspdk_blob_bdev.so.10.1 00:03:21.415 SO libspdk_accel_error.so.1.0 00:03:21.415 SO libspdk_accel_ioat.so.5.0 00:03:21.415 LIB libspdk_accel_dsa.a 00:03:21.415 CC module/scheduler/gscheduler/gscheduler.o 00:03:21.415 SO libspdk_accel_dsa.so.4.0 00:03:21.415 SYMLINK libspdk_blob_bdev.so 00:03:21.415 SYMLINK libspdk_accel_ioat.so 00:03:21.415 SYMLINK libspdk_accel_error.so 00:03:21.415 CC module/vfu_device/vfu_virtio_scsi.o 00:03:21.415 SYMLINK libspdk_accel_dsa.so 00:03:21.415 CC module/vfu_device/vfu_virtio_rpc.o 00:03:21.674 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.674 LIB libspdk_scheduler_gscheduler.a 00:03:21.674 SO libspdk_scheduler_gscheduler.so.3.0 00:03:21.674 CC module/bdev/delay/vbdev_delay.o 00:03:21.674 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.674 SYMLINK libspdk_scheduler_gscheduler.so 00:03:21.674 CC module/bdev/error/vbdev_error.o 00:03:21.674 LIB libspdk_sock_uring.a 00:03:21.674 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.674 LIB libspdk_accel_iaa.a 00:03:21.674 SO libspdk_sock_uring.so.4.0 00:03:21.674 CC module/bdev/gpt/gpt.o 00:03:21.674 LIB libspdk_sock_posix.a 00:03:21.674 SO libspdk_accel_iaa.so.2.0 00:03:21.674 SO libspdk_sock_posix.so.5.0 00:03:21.932 SYMLINK libspdk_sock_uring.so 00:03:21.932 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.932 SYMLINK libspdk_accel_iaa.so 00:03:21.932 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.932 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.932 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.932 LIB libspdk_vfu_device.a 00:03:21.932 SYMLINK libspdk_sock_posix.so 00:03:21.932 SO libspdk_vfu_device.so.2.0 00:03:21.932 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.932 SYMLINK libspdk_vfu_device.so 00:03:21.932 LIB libspdk_bdev_error.a 00:03:21.932 LIB libspdk_bdev_delay.a 00:03:21.932 LIB libspdk_blobfs_bdev.a 00:03:21.932 CC module/bdev/malloc/bdev_malloc.o 00:03:21.932 SO libspdk_bdev_error.so.5.0 00:03:21.932 SO libspdk_bdev_delay.so.5.0 00:03:21.932 SO libspdk_blobfs_bdev.so.5.0 00:03:22.191 CC module/bdev/null/bdev_null.o 00:03:22.191 CC module/bdev/nvme/bdev_nvme.o 00:03:22.191 CC module/bdev/passthru/vbdev_passthru.o 00:03:22.191 SYMLINK libspdk_bdev_error.so 00:03:22.191 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:22.191 SYMLINK libspdk_bdev_delay.so 00:03:22.191 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:22.191 SYMLINK 
libspdk_blobfs_bdev.so 00:03:22.191 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:22.191 LIB libspdk_bdev_gpt.a 00:03:22.191 CC module/bdev/raid/bdev_raid.o 00:03:22.191 SO libspdk_bdev_gpt.so.5.0 00:03:22.191 CC module/bdev/raid/bdev_raid_rpc.o 00:03:22.191 LIB libspdk_bdev_lvol.a 00:03:22.450 SYMLINK libspdk_bdev_gpt.so 00:03:22.450 CC module/bdev/null/bdev_null_rpc.o 00:03:22.450 SO libspdk_bdev_lvol.so.5.0 00:03:22.450 LIB libspdk_bdev_passthru.a 00:03:22.450 LIB libspdk_bdev_malloc.a 00:03:22.450 CC module/bdev/split/vbdev_split.o 00:03:22.450 SYMLINK libspdk_bdev_lvol.so 00:03:22.450 SO libspdk_bdev_passthru.so.5.0 00:03:22.450 SO libspdk_bdev_malloc.so.5.0 00:03:22.450 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:22.450 SYMLINK libspdk_bdev_passthru.so 00:03:22.450 SYMLINK libspdk_bdev_malloc.so 00:03:22.450 CC module/bdev/nvme/nvme_rpc.o 00:03:22.450 CC module/bdev/raid/bdev_raid_sb.o 00:03:22.450 CC module/bdev/raid/raid0.o 00:03:22.450 LIB libspdk_bdev_null.a 00:03:22.450 CC module/bdev/uring/bdev_uring.o 00:03:22.450 SO libspdk_bdev_null.so.5.0 00:03:22.708 SYMLINK libspdk_bdev_null.so 00:03:22.708 CC module/bdev/uring/bdev_uring_rpc.o 00:03:22.708 CC module/bdev/split/vbdev_split_rpc.o 00:03:22.708 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.708 CC module/bdev/nvme/vbdev_opal.o 00:03:22.708 CC module/bdev/raid/raid1.o 00:03:22.708 CC module/bdev/raid/concat.o 00:03:22.708 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.966 LIB libspdk_bdev_split.a 00:03:22.966 LIB libspdk_bdev_uring.a 00:03:22.966 SO libspdk_bdev_split.so.5.0 00:03:22.966 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.966 SO libspdk_bdev_uring.so.5.0 00:03:22.966 CC module/bdev/aio/bdev_aio.o 00:03:22.966 SYMLINK libspdk_bdev_split.so 00:03:22.966 LIB libspdk_bdev_zone_block.a 00:03:22.966 SYMLINK libspdk_bdev_uring.so 00:03:22.966 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.966 SO libspdk_bdev_zone_block.so.5.0 00:03:22.966 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.966 CC module/bdev/ftl/bdev_ftl.o 00:03:22.966 SYMLINK libspdk_bdev_zone_block.so 00:03:22.966 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:23.225 CC module/bdev/iscsi/bdev_iscsi.o 00:03:23.225 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:23.225 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:23.225 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:23.225 LIB libspdk_bdev_raid.a 00:03:23.225 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:23.225 SO libspdk_bdev_raid.so.5.0 00:03:23.225 LIB libspdk_bdev_aio.a 00:03:23.225 SO libspdk_bdev_aio.so.5.0 00:03:23.225 SYMLINK libspdk_bdev_raid.so 00:03:23.225 SYMLINK libspdk_bdev_aio.so 00:03:23.225 LIB libspdk_bdev_ftl.a 00:03:23.483 SO libspdk_bdev_ftl.so.5.0 00:03:23.483 SYMLINK libspdk_bdev_ftl.so 00:03:23.483 LIB libspdk_bdev_iscsi.a 00:03:23.483 SO libspdk_bdev_iscsi.so.5.0 00:03:23.483 SYMLINK libspdk_bdev_iscsi.so 00:03:23.740 LIB libspdk_bdev_virtio.a 00:03:23.740 SO libspdk_bdev_virtio.so.5.0 00:03:23.740 SYMLINK libspdk_bdev_virtio.so 00:03:24.315 LIB libspdk_bdev_nvme.a 00:03:24.315 SO libspdk_bdev_nvme.so.6.0 00:03:24.315 SYMLINK libspdk_bdev_nvme.so 00:03:24.582 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.582 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.582 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.582 CC module/event/subsystems/vmd/vmd.o 00:03:24.582 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.582 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:24.582 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.582 CC 
module/event/subsystems/sock/sock.o 00:03:24.840 LIB libspdk_event_vhost_blk.a 00:03:24.840 LIB libspdk_event_vmd.a 00:03:24.840 LIB libspdk_event_sock.a 00:03:24.840 LIB libspdk_event_iobuf.a 00:03:24.840 LIB libspdk_event_scheduler.a 00:03:24.840 SO libspdk_event_vhost_blk.so.2.0 00:03:24.840 LIB libspdk_event_vfu_tgt.a 00:03:24.840 SO libspdk_event_vmd.so.5.0 00:03:24.840 SO libspdk_event_sock.so.4.0 00:03:24.840 SO libspdk_event_scheduler.so.3.0 00:03:24.840 SO libspdk_event_vfu_tgt.so.2.0 00:03:24.840 SO libspdk_event_iobuf.so.2.0 00:03:24.840 SYMLINK libspdk_event_vhost_blk.so 00:03:24.840 SYMLINK libspdk_event_vmd.so 00:03:24.840 SYMLINK libspdk_event_sock.so 00:03:24.840 SYMLINK libspdk_event_scheduler.so 00:03:24.840 SYMLINK libspdk_event_vfu_tgt.so 00:03:24.841 SYMLINK libspdk_event_iobuf.so 00:03:25.099 CC module/event/subsystems/accel/accel.o 00:03:25.099 LIB libspdk_event_accel.a 00:03:25.358 SO libspdk_event_accel.so.5.0 00:03:25.358 SYMLINK libspdk_event_accel.so 00:03:25.358 CC module/event/subsystems/bdev/bdev.o 00:03:25.617 LIB libspdk_event_bdev.a 00:03:25.617 SO libspdk_event_bdev.so.5.0 00:03:25.874 SYMLINK libspdk_event_bdev.so 00:03:25.874 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.874 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.874 CC module/event/subsystems/scsi/scsi.o 00:03:25.874 CC module/event/subsystems/ublk/ublk.o 00:03:25.874 CC module/event/subsystems/nbd/nbd.o 00:03:26.132 LIB libspdk_event_scsi.a 00:03:26.132 LIB libspdk_event_ublk.a 00:03:26.132 LIB libspdk_event_nbd.a 00:03:26.132 SO libspdk_event_scsi.so.5.0 00:03:26.132 SO libspdk_event_ublk.so.2.0 00:03:26.132 SO libspdk_event_nbd.so.5.0 00:03:26.132 LIB libspdk_event_nvmf.a 00:03:26.132 SYMLINK libspdk_event_scsi.so 00:03:26.132 SO libspdk_event_nvmf.so.5.0 00:03:26.132 SYMLINK libspdk_event_ublk.so 00:03:26.132 SYMLINK libspdk_event_nbd.so 00:03:26.132 SYMLINK libspdk_event_nvmf.so 00:03:26.390 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.390 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.390 LIB libspdk_event_vhost_scsi.a 00:03:26.390 LIB libspdk_event_iscsi.a 00:03:26.649 SO libspdk_event_vhost_scsi.so.2.0 00:03:26.649 SO libspdk_event_iscsi.so.5.0 00:03:26.649 SYMLINK libspdk_event_vhost_scsi.so 00:03:26.649 SYMLINK libspdk_event_iscsi.so 00:03:26.649 SO libspdk.so.5.0 00:03:26.649 SYMLINK libspdk.so 00:03:26.907 CXX app/trace/trace.o 00:03:26.907 TEST_HEADER include/spdk/accel.h 00:03:26.907 TEST_HEADER include/spdk/accel_module.h 00:03:26.907 TEST_HEADER include/spdk/assert.h 00:03:26.907 TEST_HEADER include/spdk/barrier.h 00:03:26.907 TEST_HEADER include/spdk/base64.h 00:03:26.907 TEST_HEADER include/spdk/bdev.h 00:03:26.907 TEST_HEADER include/spdk/bdev_module.h 00:03:26.907 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.907 TEST_HEADER include/spdk/bit_array.h 00:03:26.907 TEST_HEADER include/spdk/bit_pool.h 00:03:26.907 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.907 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.907 TEST_HEADER include/spdk/blobfs.h 00:03:26.907 TEST_HEADER include/spdk/blob.h 00:03:26.907 TEST_HEADER include/spdk/conf.h 00:03:26.907 TEST_HEADER include/spdk/config.h 00:03:26.907 TEST_HEADER include/spdk/cpuset.h 00:03:26.907 TEST_HEADER include/spdk/crc16.h 00:03:26.907 TEST_HEADER include/spdk/crc32.h 00:03:26.907 TEST_HEADER include/spdk/crc64.h 00:03:26.907 TEST_HEADER include/spdk/dif.h 00:03:26.907 TEST_HEADER include/spdk/dma.h 00:03:26.907 CC examples/accel/perf/accel_perf.o 00:03:26.907 TEST_HEADER include/spdk/endian.h 
00:03:26.907 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.907 TEST_HEADER include/spdk/env.h 00:03:26.907 TEST_HEADER include/spdk/event.h 00:03:26.907 CC test/event/event_perf/event_perf.o 00:03:26.907 TEST_HEADER include/spdk/fd_group.h 00:03:26.907 TEST_HEADER include/spdk/fd.h 00:03:26.907 TEST_HEADER include/spdk/file.h 00:03:26.907 TEST_HEADER include/spdk/ftl.h 00:03:26.907 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.907 TEST_HEADER include/spdk/hexlify.h 00:03:26.907 TEST_HEADER include/spdk/histogram_data.h 00:03:26.907 TEST_HEADER include/spdk/idxd.h 00:03:26.907 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.907 TEST_HEADER include/spdk/init.h 00:03:26.907 TEST_HEADER include/spdk/ioat.h 00:03:26.907 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.907 CC test/accel/dif/dif.o 00:03:26.907 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.907 TEST_HEADER include/spdk/json.h 00:03:26.907 CC test/dma/test_dma/test_dma.o 00:03:26.907 CC test/app/bdev_svc/bdev_svc.o 00:03:26.907 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.907 CC test/blobfs/mkfs/mkfs.o 00:03:26.907 TEST_HEADER include/spdk/likely.h 00:03:26.907 CC test/bdev/bdevio/bdevio.o 00:03:26.907 TEST_HEADER include/spdk/log.h 00:03:26.907 TEST_HEADER include/spdk/lvol.h 00:03:26.907 TEST_HEADER include/spdk/memory.h 00:03:26.907 TEST_HEADER include/spdk/mmio.h 00:03:26.907 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.907 TEST_HEADER include/spdk/nbd.h 00:03:26.907 TEST_HEADER include/spdk/notify.h 00:03:26.907 TEST_HEADER include/spdk/nvme.h 00:03:26.907 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.907 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.907 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.907 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.907 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.907 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.907 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.907 TEST_HEADER include/spdk/nvmf.h 00:03:27.166 TEST_HEADER include/spdk/nvmf_spec.h 00:03:27.166 TEST_HEADER include/spdk/nvmf_transport.h 00:03:27.166 TEST_HEADER include/spdk/opal.h 00:03:27.166 TEST_HEADER include/spdk/opal_spec.h 00:03:27.166 TEST_HEADER include/spdk/pci_ids.h 00:03:27.166 TEST_HEADER include/spdk/pipe.h 00:03:27.166 TEST_HEADER include/spdk/queue.h 00:03:27.166 TEST_HEADER include/spdk/reduce.h 00:03:27.166 TEST_HEADER include/spdk/rpc.h 00:03:27.166 TEST_HEADER include/spdk/scheduler.h 00:03:27.166 TEST_HEADER include/spdk/scsi.h 00:03:27.166 TEST_HEADER include/spdk/scsi_spec.h 00:03:27.166 TEST_HEADER include/spdk/sock.h 00:03:27.166 TEST_HEADER include/spdk/stdinc.h 00:03:27.166 TEST_HEADER include/spdk/string.h 00:03:27.166 TEST_HEADER include/spdk/thread.h 00:03:27.166 TEST_HEADER include/spdk/trace.h 00:03:27.166 TEST_HEADER include/spdk/trace_parser.h 00:03:27.166 TEST_HEADER include/spdk/tree.h 00:03:27.166 TEST_HEADER include/spdk/ublk.h 00:03:27.166 TEST_HEADER include/spdk/util.h 00:03:27.166 TEST_HEADER include/spdk/uuid.h 00:03:27.166 TEST_HEADER include/spdk/version.h 00:03:27.166 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:27.166 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:27.166 TEST_HEADER include/spdk/vhost.h 00:03:27.166 TEST_HEADER include/spdk/vmd.h 00:03:27.166 TEST_HEADER include/spdk/xor.h 00:03:27.166 TEST_HEADER include/spdk/zipf.h 00:03:27.166 CXX test/cpp_headers/accel.o 00:03:27.166 LINK event_perf 00:03:27.166 LINK bdev_svc 00:03:27.166 LINK mkfs 00:03:27.166 CXX test/cpp_headers/accel_module.o 00:03:27.424 LINK spdk_trace 00:03:27.424 CC test/event/reactor/reactor.o 
00:03:27.424 LINK bdevio 00:03:27.424 LINK dif 00:03:27.424 LINK test_dma 00:03:27.424 LINK accel_perf 00:03:27.424 CXX test/cpp_headers/assert.o 00:03:27.424 CC test/event/reactor_perf/reactor_perf.o 00:03:27.424 LINK reactor 00:03:27.424 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.683 CC app/trace_record/trace_record.o 00:03:27.683 CXX test/cpp_headers/barrier.o 00:03:27.683 LINK reactor_perf 00:03:27.683 LINK mem_callbacks 00:03:27.683 CC app/nvmf_tgt/nvmf_main.o 00:03:27.683 CXX test/cpp_headers/base64.o 00:03:27.683 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.683 CC app/spdk_tgt/spdk_tgt.o 00:03:27.683 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.683 LINK spdk_trace_record 00:03:27.942 CC test/event/app_repeat/app_repeat.o 00:03:27.942 CC test/env/vtophys/vtophys.o 00:03:27.942 CXX test/cpp_headers/bdev.o 00:03:27.942 CC test/event/scheduler/scheduler.o 00:03:27.942 LINK nvmf_tgt 00:03:27.942 CXX test/cpp_headers/bdev_module.o 00:03:27.942 LINK nvme_fuzz 00:03:27.942 LINK iscsi_tgt 00:03:27.942 LINK spdk_tgt 00:03:27.942 LINK vtophys 00:03:27.942 LINK hello_bdev 00:03:27.942 LINK app_repeat 00:03:28.201 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:28.201 LINK scheduler 00:03:28.201 CXX test/cpp_headers/bdev_zone.o 00:03:28.201 CC test/app/histogram_perf/histogram_perf.o 00:03:28.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.201 CXX test/cpp_headers/bit_array.o 00:03:28.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:28.201 CC app/spdk_lspci/spdk_lspci.o 00:03:28.201 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.459 LINK histogram_perf 00:03:28.459 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.459 CXX test/cpp_headers/bit_pool.o 00:03:28.459 CC test/lvol/esnap/esnap.o 00:03:28.459 LINK spdk_lspci 00:03:28.459 LINK env_dpdk_post_init 00:03:28.459 CC examples/ioat/perf/perf.o 00:03:28.459 CC examples/blob/hello_world/hello_blob.o 00:03:28.459 CXX test/cpp_headers/blob_bdev.o 00:03:28.459 CC examples/blob/cli/blobcli.o 00:03:28.718 CC app/spdk_nvme_perf/perf.o 00:03:28.718 CC test/env/memory/memory_ut.o 00:03:28.718 LINK ioat_perf 00:03:28.718 LINK vhost_fuzz 00:03:28.718 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.718 LINK hello_blob 00:03:28.977 CXX test/cpp_headers/blobfs.o 00:03:28.977 CC examples/ioat/verify/verify.o 00:03:28.977 CC test/app/jsoncat/jsoncat.o 00:03:28.977 CC test/app/stub/stub.o 00:03:28.977 CXX test/cpp_headers/blob.o 00:03:28.977 LINK blobcli 00:03:29.235 LINK verify 00:03:29.235 LINK bdevperf 00:03:29.235 LINK jsoncat 00:03:29.235 LINK stub 00:03:29.235 CXX test/cpp_headers/conf.o 00:03:29.235 CXX test/cpp_headers/config.o 00:03:29.235 CC test/env/pci/pci_ut.o 00:03:29.235 CXX test/cpp_headers/cpuset.o 00:03:29.235 CC app/spdk_nvme_identify/identify.o 00:03:29.494 CC test/nvme/aer/aer.o 00:03:29.494 CC test/rpc_client/rpc_client_test.o 00:03:29.494 CC examples/nvme/hello_world/hello_world.o 00:03:29.494 LINK spdk_nvme_perf 00:03:29.494 CXX test/cpp_headers/crc16.o 00:03:29.494 LINK memory_ut 00:03:29.494 LINK rpc_client_test 00:03:29.752 CXX test/cpp_headers/crc32.o 00:03:29.752 LINK aer 00:03:29.752 LINK hello_world 00:03:29.752 LINK pci_ut 00:03:29.752 CC test/nvme/reset/reset.o 00:03:29.752 LINK iscsi_fuzz 00:03:29.752 CC examples/nvme/reconnect/reconnect.o 00:03:29.752 CXX test/cpp_headers/crc64.o 00:03:29.752 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:30.011 CC test/nvme/sgl/sgl.o 00:03:30.011 CC test/nvme/e2edp/nvme_dp.o 00:03:30.011 CXX test/cpp_headers/dif.o 00:03:30.011 CC test/nvme/overhead/overhead.o 
00:03:30.011 LINK reset 00:03:30.011 CC test/nvme/err_injection/err_injection.o 00:03:30.011 LINK spdk_nvme_identify 00:03:30.270 LINK reconnect 00:03:30.270 CXX test/cpp_headers/dma.o 00:03:30.270 LINK sgl 00:03:30.270 LINK nvme_dp 00:03:30.270 LINK err_injection 00:03:30.270 CC test/thread/poller_perf/poller_perf.o 00:03:30.270 LINK overhead 00:03:30.270 LINK nvme_manage 00:03:30.270 CXX test/cpp_headers/endian.o 00:03:30.270 CC app/spdk_nvme_discover/discovery_aer.o 00:03:30.270 CC examples/nvme/arbitration/arbitration.o 00:03:30.529 CC test/nvme/startup/startup.o 00:03:30.529 CC test/nvme/reserve/reserve.o 00:03:30.529 LINK poller_perf 00:03:30.530 CC test/nvme/simple_copy/simple_copy.o 00:03:30.530 CXX test/cpp_headers/env_dpdk.o 00:03:30.530 CC test/nvme/connect_stress/connect_stress.o 00:03:30.530 LINK spdk_nvme_discover 00:03:30.530 CC test/nvme/boot_partition/boot_partition.o 00:03:30.530 LINK startup 00:03:30.530 LINK reserve 00:03:30.530 CC test/nvme/compliance/nvme_compliance.o 00:03:30.788 CXX test/cpp_headers/env.o 00:03:30.788 LINK arbitration 00:03:30.788 LINK connect_stress 00:03:30.788 LINK simple_copy 00:03:30.788 LINK boot_partition 00:03:30.788 CC app/spdk_top/spdk_top.o 00:03:30.788 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.788 CC examples/nvme/hotplug/hotplug.o 00:03:30.788 CXX test/cpp_headers/event.o 00:03:31.047 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.047 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:31.047 CC examples/nvme/abort/abort.o 00:03:31.047 LINK nvme_compliance 00:03:31.047 LINK fused_ordering 00:03:31.047 CC app/vhost/vhost.o 00:03:31.047 CXX test/cpp_headers/fd_group.o 00:03:31.047 LINK hotplug 00:03:31.047 LINK cmb_copy 00:03:31.047 LINK doorbell_aers 00:03:31.047 CC test/nvme/fdp/fdp.o 00:03:31.305 LINK vhost 00:03:31.306 CXX test/cpp_headers/fd.o 00:03:31.306 CC app/spdk_dd/spdk_dd.o 00:03:31.306 CXX test/cpp_headers/file.o 00:03:31.306 LINK abort 00:03:31.306 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.306 CXX test/cpp_headers/ftl.o 00:03:31.306 CC app/fio/nvme/fio_plugin.o 00:03:31.564 CXX test/cpp_headers/gpt_spec.o 00:03:31.564 CXX test/cpp_headers/hexlify.o 00:03:31.564 CC app/fio/bdev/fio_plugin.o 00:03:31.564 LINK fdp 00:03:31.564 LINK pmr_persistence 00:03:31.564 LINK spdk_dd 00:03:31.564 LINK spdk_top 00:03:31.564 CXX test/cpp_headers/histogram_data.o 00:03:31.823 CC test/nvme/cuse/cuse.o 00:03:31.823 CC examples/sock/hello_world/hello_sock.o 00:03:31.823 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.823 CC examples/vmd/led/led.o 00:03:31.823 CXX test/cpp_headers/idxd.o 00:03:31.823 LINK lsvmd 00:03:31.823 LINK led 00:03:31.823 CC examples/util/zipf/zipf.o 00:03:31.823 CC examples/nvmf/nvmf/nvmf.o 00:03:31.823 LINK spdk_nvme 00:03:32.082 LINK hello_sock 00:03:32.082 LINK spdk_bdev 00:03:32.082 CXX test/cpp_headers/idxd_spec.o 00:03:32.082 LINK zipf 00:03:32.082 CXX test/cpp_headers/init.o 00:03:32.082 CXX test/cpp_headers/ioat.o 00:03:32.082 CC examples/idxd/perf/perf.o 00:03:32.082 CXX test/cpp_headers/ioat_spec.o 00:03:32.082 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.082 CC examples/thread/thread/thread_ex.o 00:03:32.341 CXX test/cpp_headers/iscsi_spec.o 00:03:32.341 LINK nvmf 00:03:32.341 CXX test/cpp_headers/json.o 00:03:32.341 CXX test/cpp_headers/jsonrpc.o 00:03:32.341 CXX test/cpp_headers/likely.o 00:03:32.341 LINK interrupt_tgt 00:03:32.342 CXX test/cpp_headers/log.o 00:03:32.342 CXX test/cpp_headers/lvol.o 00:03:32.342 LINK thread 00:03:32.342 CXX test/cpp_headers/memory.o 00:03:32.342 CXX 
test/cpp_headers/mmio.o 00:03:32.600 LINK idxd_perf 00:03:32.600 CXX test/cpp_headers/nbd.o 00:03:32.600 CXX test/cpp_headers/notify.o 00:03:32.600 CXX test/cpp_headers/nvme.o 00:03:32.600 CXX test/cpp_headers/nvme_intel.o 00:03:32.600 CXX test/cpp_headers/nvme_ocssd.o 00:03:32.600 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:32.600 CXX test/cpp_headers/nvme_spec.o 00:03:32.600 CXX test/cpp_headers/nvme_zns.o 00:03:32.600 CXX test/cpp_headers/nvmf_cmd.o 00:03:32.600 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:32.600 CXX test/cpp_headers/nvmf.o 00:03:32.600 CXX test/cpp_headers/nvmf_spec.o 00:03:32.858 LINK cuse 00:03:32.859 CXX test/cpp_headers/nvmf_transport.o 00:03:32.859 CXX test/cpp_headers/opal.o 00:03:32.859 CXX test/cpp_headers/opal_spec.o 00:03:32.859 CXX test/cpp_headers/pci_ids.o 00:03:32.859 CXX test/cpp_headers/pipe.o 00:03:32.859 CXX test/cpp_headers/queue.o 00:03:32.859 CXX test/cpp_headers/reduce.o 00:03:32.859 CXX test/cpp_headers/rpc.o 00:03:32.859 CXX test/cpp_headers/scheduler.o 00:03:32.859 CXX test/cpp_headers/scsi.o 00:03:32.859 CXX test/cpp_headers/scsi_spec.o 00:03:32.859 CXX test/cpp_headers/sock.o 00:03:33.118 CXX test/cpp_headers/stdinc.o 00:03:33.118 CXX test/cpp_headers/string.o 00:03:33.118 CXX test/cpp_headers/thread.o 00:03:33.118 LINK esnap 00:03:33.118 CXX test/cpp_headers/trace.o 00:03:33.118 CXX test/cpp_headers/trace_parser.o 00:03:33.118 CXX test/cpp_headers/tree.o 00:03:33.118 CXX test/cpp_headers/ublk.o 00:03:33.118 CXX test/cpp_headers/util.o 00:03:33.118 CXX test/cpp_headers/uuid.o 00:03:33.118 CXX test/cpp_headers/version.o 00:03:33.118 CXX test/cpp_headers/vfio_user_pci.o 00:03:33.118 CXX test/cpp_headers/vfio_user_spec.o 00:03:33.118 CXX test/cpp_headers/vhost.o 00:03:33.118 CXX test/cpp_headers/vmd.o 00:03:33.377 CXX test/cpp_headers/xor.o 00:03:33.377 CXX test/cpp_headers/zipf.o 00:03:33.377 ************************************ 00:03:33.377 END TEST make 00:03:33.377 ************************************ 00:03:33.377 00:03:33.377 real 0m58.582s 00:03:33.377 user 6m27.112s 00:03:33.377 sys 1m22.338s 00:03:33.377 00:15:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:33.377 00:15:49 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.637 00:15:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.637 00:15:49 -- nvmf/common.sh@7 -- # uname -s 00:03:33.637 00:15:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.637 00:15:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.637 00:15:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.637 00:15:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.637 00:15:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.637 00:15:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.638 00:15:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.638 00:15:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.638 00:15:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.638 00:15:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.638 00:15:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:03:33.638 00:15:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:03:33.638 00:15:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.638 00:15:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.638 00:15:49 -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:03:33.638 00:15:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.638 00:15:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.638 00:15:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.638 00:15:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.638 00:15:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.638 00:15:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.638 00:15:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.638 00:15:49 -- paths/export.sh@5 -- # export PATH 00:03:33.638 00:15:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.638 00:15:49 -- nvmf/common.sh@46 -- # : 0 00:03:33.638 00:15:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:33.638 00:15:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:33.638 00:15:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:33.638 00:15:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.638 00:15:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.638 00:15:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:33.638 00:15:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:33.638 00:15:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:33.638 00:15:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.638 00:15:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.638 00:15:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.638 00:15:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.638 00:15:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.638 00:15:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.638 00:15:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.638 00:15:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.638 00:15:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.638 00:15:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.638 00:15:49 -- spdk/autotest.sh@48 -- # udevadm_pid=48028 00:03:33.638 00:15:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.638 00:15:49 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:33.638 00:15:49 -- spdk/autotest.sh@54 -- # echo 48030 00:03:33.638 00:15:49 -- spdk/autotest.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:33.638 00:15:49 -- spdk/autotest.sh@56 -- # echo 48031 00:03:33.638 00:15:49 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:33.638 00:15:49 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:33.638 00:15:49 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.638 00:15:49 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:33.638 00:15:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:33.638 00:15:49 -- common/autotest_common.sh@10 -- # set +x 00:03:33.638 00:15:49 -- spdk/autotest.sh@70 -- # create_test_list 00:03:33.638 00:15:49 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:33.638 00:15:49 -- common/autotest_common.sh@10 -- # set +x 00:03:33.638 00:15:49 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:33.638 00:15:49 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:33.638 00:15:49 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:33.638 00:15:49 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:33.638 00:15:49 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:33.638 00:15:49 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:33.638 00:15:49 -- common/autotest_common.sh@1440 -- # uname 00:03:33.638 00:15:49 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:33.638 00:15:49 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:33.638 00:15:49 -- common/autotest_common.sh@1460 -- # uname 00:03:33.638 00:15:49 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:33.638 00:15:49 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:33.638 00:15:49 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:33.638 00:15:49 -- spdk/autotest.sh@83 -- # hash lcov 00:03:33.638 00:15:49 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:33.638 00:15:49 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:33.638 --rc lcov_branch_coverage=1 00:03:33.638 --rc lcov_function_coverage=1 00:03:33.638 --rc genhtml_branch_coverage=1 00:03:33.638 --rc genhtml_function_coverage=1 00:03:33.638 --rc genhtml_legend=1 00:03:33.638 --rc geninfo_all_blocks=1 00:03:33.638 ' 00:03:33.638 00:15:49 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:33.638 --rc lcov_branch_coverage=1 00:03:33.638 --rc lcov_function_coverage=1 00:03:33.638 --rc genhtml_branch_coverage=1 00:03:33.638 --rc genhtml_function_coverage=1 00:03:33.638 --rc genhtml_legend=1 00:03:33.638 --rc geninfo_all_blocks=1 00:03:33.638 ' 00:03:33.638 00:15:49 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:33.638 --rc lcov_branch_coverage=1 00:03:33.638 --rc lcov_function_coverage=1 00:03:33.638 --rc genhtml_branch_coverage=1 00:03:33.638 --rc genhtml_function_coverage=1 00:03:33.638 --rc genhtml_legend=1 00:03:33.638 --rc geninfo_all_blocks=1 00:03:33.638 --no-external' 00:03:33.638 00:15:49 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:33.638 --rc lcov_branch_coverage=1 00:03:33.638 --rc lcov_function_coverage=1 00:03:33.638 --rc genhtml_branch_coverage=1 00:03:33.638 --rc genhtml_function_coverage=1 00:03:33.638 --rc genhtml_legend=1 00:03:33.638 --rc geninfo_all_blocks=1 00:03:33.638 --no-external' 00:03:33.638 00:15:49 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:33.897 lcov: LCOV version 1.15 00:03:33.897 00:15:49 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:42.014 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:42.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:42.014 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:42.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:42.014 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:42.014 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:00.127 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:00.127 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 
00:04:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:00.128 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:00.128 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:00.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:01.503 00:16:17 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:01.503 00:16:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:01.503 00:16:17 -- common/autotest_common.sh@10 -- # set +x 00:04:01.503 00:16:17 -- spdk/autotest.sh@102 -- # rm -f 00:04:01.503 00:16:17 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.438 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:02.438 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:04:02.438 00:16:18 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:02.438 00:16:18 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:02.438 00:16:18 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:02.438 00:16:18 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:02.438 00:16:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.438 00:16:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:02.438 00:16:18 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:02.438 00:16:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.438 00:16:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:02.438 00:16:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:02.438 
00:16:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.438 00:16:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:02.438 00:16:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:02.438 00:16:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.438 00:16:18 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:02.438 00:16:18 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:02.438 00:16:18 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:02.438 00:16:18 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.438 00:16:18 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:02.438 00:16:18 -- spdk/autotest.sh@121 -- # grep -v p 00:04:02.438 00:16:18 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:04:02.438 00:16:18 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.438 00:16:18 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.438 00:16:18 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:02.438 00:16:18 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:02.438 00:16:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.438 No valid GPT data, bailing 00:04:02.438 00:16:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.438 00:16:18 -- scripts/common.sh@393 -- # pt= 00:04:02.438 00:16:18 -- scripts/common.sh@394 -- # return 1 00:04:02.438 00:16:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.438 1+0 records in 00:04:02.438 1+0 records out 00:04:02.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00315604 s, 332 MB/s 00:04:02.438 00:16:18 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.438 00:16:18 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.438 00:16:18 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:04:02.438 00:16:18 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:04:02.438 00:16:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:02.438 No valid GPT data, bailing 00:04:02.438 00:16:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:02.438 00:16:18 -- scripts/common.sh@393 -- # pt= 00:04:02.438 00:16:18 -- scripts/common.sh@394 -- # return 1 00:04:02.438 00:16:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:02.438 1+0 records in 00:04:02.438 1+0 records out 00:04:02.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00388294 s, 270 MB/s 00:04:02.439 00:16:18 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.439 00:16:18 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.439 00:16:18 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:04:02.439 00:16:18 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:04:02.439 00:16:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:02.439 No valid 
GPT data, bailing 00:04:02.439 00:16:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:02.697 00:16:18 -- scripts/common.sh@393 -- # pt= 00:04:02.697 00:16:18 -- scripts/common.sh@394 -- # return 1 00:04:02.697 00:16:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:02.697 1+0 records in 00:04:02.697 1+0 records out 00:04:02.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418037 s, 251 MB/s 00:04:02.697 00:16:18 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:02.697 00:16:18 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:02.697 00:16:18 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:04:02.697 00:16:18 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:04:02.697 00:16:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:02.697 No valid GPT data, bailing 00:04:02.697 00:16:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:02.697 00:16:18 -- scripts/common.sh@393 -- # pt= 00:04:02.697 00:16:18 -- scripts/common.sh@394 -- # return 1 00:04:02.697 00:16:18 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:02.697 1+0 records in 00:04:02.697 1+0 records out 00:04:02.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428461 s, 245 MB/s 00:04:02.697 00:16:18 -- spdk/autotest.sh@129 -- # sync 00:04:02.956 00:16:18 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.956 00:16:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.956 00:16:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.861 00:16:20 -- spdk/autotest.sh@135 -- # uname -s 00:04:04.861 00:16:20 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:04.861 00:16:20 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:04.861 00:16:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.861 00:16:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.861 00:16:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.861 ************************************ 00:04:04.861 START TEST setup.sh 00:04:04.861 ************************************ 00:04:04.861 00:16:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:04.861 * Looking for test storage... 00:04:04.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.861 00:16:20 -- setup/test-setup.sh@10 -- # uname -s 00:04:04.861 00:16:20 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:04.861 00:16:20 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:04.861 00:16:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.861 00:16:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.861 00:16:20 -- common/autotest_common.sh@10 -- # set +x 00:04:04.861 ************************************ 00:04:04.861 START TEST acl 00:04:04.861 ************************************ 00:04:04.861 00:16:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:05.120 * Looking for test storage... 
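The pre_cleanup sweep traced above walks every /dev/nvme*n* namespace, prints a GPT report with scripts/spdk-gpt.py, falls back to blkid for a partition-table type, and zero-fills the first 1 MiB with dd when nothing is found. What follows is only an illustrative stand-alone sketch of that sweep, not the autotest code itself; the spdk-gpt.py path is the one from this log, and root privileges are assumed for blkid and dd.

#!/usr/bin/env bash
# Sketch of the namespace sweep traced above (not the autotest implementation).
# Assumes the spdk-gpt.py path from this log and root privileges for blkid/dd.
set -euo pipefail

SPDK_GPT=/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py

for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do   # skip partitions like nvme0n1p1
    # Print the GPT report for the device, as the traced run does.
    "$SPDK_GPT" "$dev" || true
    # Treat the device as "in use" only if blkid reports a partition-table type.
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
    if [[ -n "$pt" ]]; then
        echo "$dev: $pt partition table found, skipping"
        continue
    fi
    # Nothing recognizable on the device: clear the first 1 MiB, as the dd calls above do.
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync

On this run all four namespaces came back without a partition table, so each one got the single 1 MiB dd pass shown by the "1+0 records in / 1+0 records out" lines.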
00:04:05.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:05.120 00:16:20 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:05.120 00:16:20 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:05.120 00:16:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:05.120 00:16:20 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:05.120 00:16:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:05.120 00:16:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:05.120 00:16:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:05.120 00:16:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:05.120 00:16:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:05.120 00:16:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:05.120 00:16:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:05.120 00:16:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:05.120 00:16:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:05.120 00:16:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:05.120 00:16:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:05.120 00:16:20 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:05.120 00:16:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:05.120 00:16:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:05.120 00:16:20 -- setup/acl.sh@12 -- # devs=() 00:04:05.120 00:16:20 -- setup/acl.sh@12 -- # declare -a devs 00:04:05.120 00:16:20 -- setup/acl.sh@13 -- # drivers=() 00:04:05.120 00:16:20 -- setup/acl.sh@13 -- # declare -A drivers 00:04:05.120 00:16:20 -- setup/acl.sh@51 -- # setup reset 00:04:05.120 00:16:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.120 00:16:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.690 00:16:21 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:05.690 00:16:21 -- setup/acl.sh@16 -- # local dev driver 00:04:05.690 00:16:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.690 00:16:21 -- setup/acl.sh@15 -- # setup output status 00:04:05.690 00:16:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.690 00:16:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:05.949 Hugepages 00:04:05.949 node hugesize free / total 00:04:05.949 00:16:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:05.949 00:16:21 -- setup/acl.sh@19 -- # continue 00:04:05.949 00:16:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.949 00:04:05.949 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:05.949 00:16:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:05.949 00:16:21 -- setup/acl.sh@19 -- # continue 00:04:05.949 00:16:21 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:04:05.949 00:16:21 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:05.949 00:16:21 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:05.949 00:16:21 -- setup/acl.sh@20 -- # continue 00:04:05.949 00:16:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.208 00:16:21 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:06.208 00:16:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.208 00:16:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:06.208 00:16:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.208 00:16:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.208 00:16:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.208 00:16:21 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:04:06.208 00:16:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.208 00:16:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:06.208 00:16:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.208 00:16:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.208 00:16:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.208 00:16:21 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:06.208 00:16:21 -- setup/acl.sh@54 -- # run_test denied denied 00:04:06.208 00:16:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.208 00:16:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.208 00:16:21 -- common/autotest_common.sh@10 -- # set +x 00:04:06.208 ************************************ 00:04:06.208 START TEST denied 00:04:06.208 ************************************ 00:04:06.208 00:16:21 -- common/autotest_common.sh@1104 -- # denied 00:04:06.208 00:16:21 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:06.208 00:16:21 -- setup/acl.sh@38 -- # setup output config 00:04:06.208 00:16:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.208 00:16:21 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:06.208 00:16:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.173 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:07.173 00:16:22 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:07.173 00:16:22 -- setup/acl.sh@28 -- # local dev driver 00:04:07.173 00:16:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.173 00:16:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:07.173 00:16:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:07.173 00:16:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.173 00:16:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.173 00:16:22 -- setup/acl.sh@41 -- # setup reset 00:04:07.173 00:16:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.173 00:16:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.740 00:04:07.740 real 0m1.543s 00:04:07.740 user 0m0.656s 00:04:07.740 sys 0m0.835s 00:04:07.740 00:16:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.740 ************************************ 00:04:07.740 END TEST denied 00:04:07.740 ************************************ 00:04:07.740 00:16:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.740 00:16:23 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:07.740 00:16:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.740 00:16:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.740 
00:16:23 -- common/autotest_common.sh@10 -- # set +x 00:04:07.740 ************************************ 00:04:07.740 START TEST allowed 00:04:07.740 ************************************ 00:04:07.740 00:16:23 -- common/autotest_common.sh@1104 -- # allowed 00:04:07.740 00:16:23 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:07.740 00:16:23 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:07.740 00:16:23 -- setup/acl.sh@45 -- # setup output config 00:04:07.740 00:16:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.740 00:16:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.678 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.678 00:16:24 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:08.678 00:16:24 -- setup/acl.sh@28 -- # local dev driver 00:04:08.678 00:16:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:08.678 00:16:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:08.678 00:16:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:08.678 00:16:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:08.678 00:16:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:08.678 00:16:24 -- setup/acl.sh@48 -- # setup reset 00:04:08.678 00:16:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.678 00:16:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.246 ************************************ 00:04:09.246 END TEST allowed 00:04:09.246 ************************************ 00:04:09.246 00:04:09.246 real 0m1.529s 00:04:09.246 user 0m0.691s 00:04:09.246 sys 0m0.846s 00:04:09.246 00:16:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.246 00:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.246 ************************************ 00:04:09.246 END TEST acl 00:04:09.246 ************************************ 00:04:09.246 00:04:09.246 real 0m4.386s 00:04:09.246 user 0m1.904s 00:04:09.246 sys 0m2.465s 00:04:09.246 00:16:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.246 00:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.507 00:16:25 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:09.507 00:16:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.507 00:16:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.507 00:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.507 ************************************ 00:04:09.507 START TEST hugepages 00:04:09.507 ************************************ 00:04:09.507 00:16:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:09.507 * Looking for test storage... 
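The denied/allowed tests above steer setup.sh config and reset through PCI_BLOCKED and PCI_ALLOWED, then verify the outcome by resolving the device's driver symlink under /sys/bus/pci/devices. Below is a small stand-alone helper sketching the same check; the BDF 0000:00:06.0 and the expected driver name are simply the values from this run, not fixed parts of the test.

#!/usr/bin/env bash
# Minimal sketch of the verification step seen above: resolve which driver a PCI
# device is bound to and compare it with what the test expects. The BDF and the
# expected driver below are just the values from this particular run.
set -euo pipefail

check_driver() {
    local bdf=$1 expected=$2 link driver
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ ! -e $link ]]; then
        echo "$bdf: not bound to any driver"
        return 1
    fi
    driver=$(basename "$(readlink -f "$link")")
    echo "$bdf: bound to $driver"
    [[ $driver == "$expected" ]]
}

# A blocked controller should still sit on the kernel nvme driver.
check_driver 0000:00:06.0 nvme

During the denied pass the controller stays on nvme (the "Skipping denied controller" line), while the allowed pass rebinds it, matching the "nvme -> uio_pci_generic" line above.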
00:04:09.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.507 00:16:25 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:09.507 00:16:25 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:09.507 00:16:25 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:09.507 00:16:25 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:09.507 00:16:25 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:09.507 00:16:25 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:09.507 00:16:25 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:09.507 00:16:25 -- setup/common.sh@18 -- # local node= 00:04:09.507 00:16:25 -- setup/common.sh@19 -- # local var val 00:04:09.507 00:16:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:09.507 00:16:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.507 00:16:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.507 00:16:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.507 00:16:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.507 00:16:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 5979460 kB' 'MemAvailable: 7354600 kB' 'Buffers: 2684 kB' 'Cached: 1588704 kB' 'SwapCached: 0 kB' 'Active: 440596 kB' 'Inactive: 1253048 kB' 'Active(anon): 112764 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 104076 kB' 'Mapped: 50808 kB' 'Shmem: 10508 kB' 'KReclaimable: 62412 kB' 'Slab: 155768 kB' 'SReclaimable: 62412 kB' 'SUnreclaim: 93356 kB' 'KernelStack: 6476 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 303756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55208 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- 
setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.507 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.507 00:16:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # continue 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.508 00:16:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.508 00:16:25 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:09.508 00:16:25 -- setup/common.sh@33 -- # echo 2048 00:04:09.508 00:16:25 -- setup/common.sh@33 -- # return 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:09.508 00:16:25 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:09.508 00:16:25 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:09.508 00:16:25 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:09.508 00:16:25 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:09.508 00:16:25 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
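[editor's note] The long scan above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches Hugepagesize and echoes 2048, which hugepages.sh then records as default_hugepages. Below is a minimal sketch of that parsing loop, reconstructed from the xtrace rather than copied from the script: the IFS=': ' split, the skip-on-mismatch "continue", and the echo-on-match are what the trace shows; the exact loop form and quoting are assumptions.

    # Reconstructed sketch of the traced get_meminfo scan (not the verbatim script)
    get_meminfo() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo               # slurp the file once
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "Hugepagesize:   2048 kB" -> var, val
            [[ $var == "$get" ]] || continue         # skip every other key, as in the trace
            echo "$val"                              # e.g. 2048 for Hugepagesize
            return 0
        done
        return 1
    }
    get_meminfo Hugepagesize   # -> 2048, matching the "echo 2048" in the trace above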
00:04:09.508 00:16:25 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:09.508 00:16:25 -- setup/hugepages.sh@207 -- # get_nodes 00:04:09.508 00:16:25 -- setup/hugepages.sh@27 -- # local node 00:04:09.508 00:16:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.508 00:16:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:09.508 00:16:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:09.508 00:16:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.508 00:16:25 -- setup/hugepages.sh@208 -- # clear_hp 00:04:09.508 00:16:25 -- setup/hugepages.sh@37 -- # local node hp 00:04:09.508 00:16:25 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.508 00:16:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.508 00:16:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.508 00:16:25 -- setup/hugepages.sh@41 -- # echo 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.508 00:16:25 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.508 00:16:25 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:09.508 00:16:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.508 00:16:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.508 00:16:25 -- common/autotest_common.sh@10 -- # set +x 00:04:09.508 ************************************ 00:04:09.508 START TEST default_setup 00:04:09.508 ************************************ 00:04:09.508 00:16:25 -- common/autotest_common.sh@1104 -- # default_setup 00:04:09.508 00:16:25 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.508 00:16:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:09.508 00:16:25 -- setup/hugepages.sh@51 -- # shift 00:04:09.508 00:16:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:09.508 00:16:25 -- setup/hugepages.sh@52 -- # local node_ids 00:04:09.508 00:16:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.508 00:16:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.508 00:16:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:09.508 00:16:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.508 00:16:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.508 00:16:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:09.508 00:16:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:09.508 00:16:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:09.508 00:16:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:09.508 00:16:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:09.508 00:16:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:09.508 00:16:25 -- setup/hugepages.sh@73 -- # return 0 00:04:09.508 00:16:25 -- setup/hugepages.sh@137 -- # setup output 00:04:09.508 00:16:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.508 00:16:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.336 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.336 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.336 00:16:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:10.336 00:16:26 -- setup/hugepages.sh@89 -- # local node 00:04:10.336 00:16:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.336 00:16:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.336 00:16:26 -- setup/hugepages.sh@92 -- # local surp 00:04:10.336 00:16:26 -- setup/hugepages.sh@93 -- # local resv 00:04:10.336 00:16:26 -- setup/hugepages.sh@94 -- # local anon 00:04:10.336 00:16:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.336 00:16:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.336 00:16:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.336 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:10.336 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:10.336 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.336 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.336 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.336 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.336 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.336 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.336 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.336 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094544 kB' 'MemAvailable: 9469572 kB' 'Buffers: 2684 kB' 'Cached: 1588692 kB' 'SwapCached: 0 kB' 'Active: 456880 kB' 'Inactive: 1253052 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 120160 kB' 'Mapped: 50920 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155616 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93432 kB' 'KernelStack: 6480 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.336 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 
00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 
-- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.337 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.337 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.338 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:10.338 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:10.338 00:16:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.338 00:16:26 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.338 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.338 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:10.338 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:10.338 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.338 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.338 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.338 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.338 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.338 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094544 kB' 'MemAvailable: 9469572 kB' 'Buffers: 2684 kB' 'Cached: 1588692 kB' 'SwapCached: 0 kB' 'Active: 456540 kB' 'Inactive: 1253052 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119716 kB' 'Mapped: 50868 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155620 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93436 kB' 'KernelStack: 6448 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.338 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 
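[editor's note] The anon=0 picked up just before this HugePages_Surp scan comes from an AnonHugePages lookup that hugepages.sh only performs after checking that transparent hugepages are not disabled (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test near the start of verify_nr_hugepages, visible earlier in this trace). A minimal sketch of that gate, using the get_meminfo helper sketched above; the fallback branch when THP is set to [never] is an assumption, not something this trace exercises.

    # Sketch of the THP gate around the AnonHugePages lookup (hugepages.sh@96-97)
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this VM
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
    else
        anon=0                              # assumed fallback when THP is disabled
    fi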
00:04:10.338 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.338 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- 
setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.339 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.339 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.601 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.601 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.601 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.601 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.601 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 
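[editor's note] Every get_meminfo call in this trace opens the same way: "local node=" is left empty, the "[[ -e /sys/devices/system/node/node/meminfo ]]" test fails, so the helper falls back to /proc/meminfo, and a "${mem[@]#Node +([0-9]) }" expansion strips the "Node N " prefix that per-node meminfo files would carry. A reconstructed sketch of that source selection follows; the function name meminfo_source and the final printf are added here for illustration and are not part of the traced script.

    shopt -s extglob                      # the +([0-9]) pattern below needs extglob
    meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        # with an empty node the sysfs path does not exist, so /proc/meminfo is used
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
        printf '%s\n' "${mem[@]}"
    }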
00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.602 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:10.602 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:10.602 00:16:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.602 00:16:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.602 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.602 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:10.602 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:10.602 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.602 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.602 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.602 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.602 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.602 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094292 kB' 'MemAvailable: 9469320 kB' 'Buffers: 2684 kB' 'Cached: 1588692 kB' 
'SwapCached: 0 kB' 'Active: 456196 kB' 'Inactive: 1253052 kB' 'Active(anon): 128364 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155612 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93428 kB' 'KernelStack: 6448 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.602 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.602 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.603 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.603 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 
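[editor's note] The meminfo snapshots being scanned here all report HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is exactly the 2097152 kB pool that get_test_nr_hugepages was asked for earlier in the trace. A quick, illustrative sanity check of that arithmetic:

    echo $(( 2097152 / 2048 ))   # requested kB / page size kB = 1024 pages (nr_hugepages)
    echo $(( 1024 * 2048 ))      # pages * 2048 kB page size = 2097152 kB, the Hugetlb line above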
00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.604 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:10.604 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:10.604 nr_hugepages=1024 00:04:10.604 resv_hugepages=0 00:04:10.604 surplus_hugepages=0 00:04:10.604 anon_hugepages=0 00:04:10.604 00:16:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.604 00:16:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.604 00:16:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.604 00:16:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.604 00:16:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.604 00:16:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.604 00:16:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.604 00:16:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.604 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.604 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:10.604 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:10.604 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.604 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.604 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.604 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.604 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.604 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094292 kB' 'MemAvailable: 9469320 kB' 'Buffers: 2684 kB' 'Cached: 1588692 kB' 'SwapCached: 0 kB' 'Active: 456108 kB' 'Inactive: 1253052 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119364 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155612 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93428 kB' 'KernelStack: 6432 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.604 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.604 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 
00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.605 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.605 00:16:26 -- setup/common.sh@33 -- # echo 1024 
00:04:10.605 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:10.605 00:16:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.605 00:16:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.605 00:16:26 -- setup/hugepages.sh@27 -- # local node 00:04:10.605 00:16:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.605 00:16:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.605 00:16:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.605 00:16:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.605 00:16:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.605 00:16:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.605 00:16:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.605 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.605 00:16:26 -- setup/common.sh@18 -- # local node=0 00:04:10.605 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:10.605 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.605 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.605 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.605 00:16:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.605 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.605 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.605 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8094292 kB' 'MemUsed: 4144828 kB' 'SwapCached: 0 kB' 'Active: 456244 kB' 'Inactive: 1253052 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253052 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1591376 kB' 'Mapped: 50808 kB' 'AnonPages: 119500 kB' 'Shmem: 10484 kB' 'KernelStack: 6464 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155612 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 
00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.606 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.606 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 
00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # continue 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.607 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.607 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.607 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:10.607 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:10.607 node0=1024 expecting 1024 00:04:10.607 ************************************ 00:04:10.607 END TEST default_setup 00:04:10.607 ************************************ 00:04:10.607 00:16:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.607 00:16:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.607 00:16:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.607 00:16:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.607 00:16:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.607 00:16:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.607 00:04:10.607 real 0m1.040s 00:04:10.607 user 0m0.489s 00:04:10.607 sys 0m0.457s 00:04:10.607 00:16:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.607 00:16:26 -- common/autotest_common.sh@10 -- # set +x 00:04:10.607 00:16:26 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:10.607 00:16:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.607 00:16:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.607 00:16:26 -- common/autotest_common.sh@10 -- # set +x 00:04:10.607 ************************************ 00:04:10.607 START TEST per_node_1G_alloc 00:04:10.607 ************************************ 
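The per_node_1G_alloc test that starts here pre-allocates NRHUGE=512 hugepages of 2048 kB on HUGENODE=0 (1 GiB in total) and then re-verifies the counts the same way default_setup did above: by reading the HugePages_* fields out of /proc/meminfo and /sys/devices/system/node/node0/meminfo. A minimal sketch of that parsing idea, assuming a standard Linux meminfo layout (the helper below is illustrative only and is not the setup/common.sh implementation traced in this log):

# Sketch: read one field from /proc/meminfo, or from a node's meminfo when a node id is given.
# get_field is a hypothetical helper name, not an SPDK function.
get_field() {
    local key=$1 node=${2:-}
    local src=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] \
        && src=/sys/devices/system/node/node${node}/meminfo
    # Per-node meminfo lines carry a "Node <id>" prefix; drop it before matching the key.
    sed 's/^Node [0-9]* *//' "$src" | awk -F': *' -v k="$key" '$1 == k {print $2}'
}

get_field HugePages_Total      # e.g. 1024 for the default_setup run above, 512 after this test sets up
get_field HugePages_Surp 0     # surplus pages on node 0, expected 0

The "Node <id>" prefix on every per-node line is also why the trace above strips it with mem=("${mem[@]#Node +([0-9]) }") before scanning the fields.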
00:04:10.607 00:16:26 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:10.607 00:16:26 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:10.607 00:16:26 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:10.607 00:16:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.607 00:16:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:10.607 00:16:26 -- setup/hugepages.sh@51 -- # shift 00:04:10.607 00:16:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:10.607 00:16:26 -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.607 00:16:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.607 00:16:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.607 00:16:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:10.607 00:16:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:10.607 00:16:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.607 00:16:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.607 00:16:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.607 00:16:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.607 00:16:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.607 00:16:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:10.607 00:16:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.607 00:16:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.607 00:16:26 -- setup/hugepages.sh@73 -- # return 0 00:04:10.607 00:16:26 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:10.607 00:16:26 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:10.607 00:16:26 -- setup/hugepages.sh@146 -- # setup output 00:04:10.607 00:16:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.607 00:16:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.129 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.129 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.129 00:16:26 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:11.129 00:16:26 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:11.129 00:16:26 -- setup/hugepages.sh@89 -- # local node 00:04:11.129 00:16:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.129 00:16:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.129 00:16:26 -- setup/hugepages.sh@92 -- # local surp 00:04:11.129 00:16:26 -- setup/hugepages.sh@93 -- # local resv 00:04:11.129 00:16:26 -- setup/hugepages.sh@94 -- # local anon 00:04:11.129 00:16:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.129 00:16:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.129 00:16:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.129 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:11.129 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:11.129 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.129 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.129 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.129 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.129 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.129 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.129 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9149844 kB' 'MemAvailable: 10524884 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456476 kB' 'Inactive: 1253064 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119752 kB' 'Mapped: 50984 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155584 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93400 kB' 'KernelStack: 6456 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.129 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.129 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 
00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.130 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.130 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:11.130 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:11.130 00:16:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.130 00:16:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.130 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.130 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:11.130 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:11.130 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.130 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.130 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.130 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.130 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.130 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.130 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9150024 kB' 'MemAvailable: 10525064 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456184 kB' 'Inactive: 1253064 kB' 'Active(anon): 128352 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 
kB' 'Slab: 155596 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93412 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 
00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.131 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.131 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.132 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:11.132 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:11.132 00:16:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.132 00:16:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.132 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.132 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:11.132 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:11.132 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.132 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.132 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.132 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.132 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.132 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9150024 kB' 'MemAvailable: 10525064 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456188 kB' 'Inactive: 1253064 kB' 'Active(anon): 128356 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155596 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93412 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- 
# continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.132 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.132 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.133 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:11.133 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:11.133 00:16:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.133 nr_hugepages=512 00:04:11.133 00:16:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:11.133 
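Editor's note for readability: the bulk of the trace above and below is setup/common.sh's get_meminfo helper scanning a meminfo snapshot one field at a time; the repeated "[[ Key == \H\u\g\e\P\a\g\e\s\_... ]] / continue" entries are that scan unrolled by xtrace. A minimal standalone sketch of the pattern, under an assumed name (get_meminfo_sketch is not part of the SPDK scripts), looks like this:

# Minimal sketch (assumed name) of the get_meminfo pattern seen in the trace.
# It picks /proc/meminfo or the per-node file, strips the "Node N " prefix
# that the per-node files carry, and prints the value of the requested field,
# defaulting to 0 when the field is absent.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}
# e.g.: surp=$(get_meminfo_sketch HugePages_Surp); resv=$(get_meminfo_sketch HugePages_Rsvd)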
resv_hugepages=0 00:04:11.133 surplus_hugepages=0 00:04:11.133 00:16:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.133 00:16:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.133 anon_hugepages=0 00:04:11.133 00:16:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.133 00:16:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.133 00:16:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:11.133 00:16:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.133 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.133 00:16:26 -- setup/common.sh@18 -- # local node= 00:04:11.133 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:11.133 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.133 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.133 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.133 00:16:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.133 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.133 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9149776 kB' 'MemAvailable: 10524816 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456200 kB' 'Inactive: 1253064 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119492 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155588 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93404 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.133 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.133 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 
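Editor's note: stripped of the per-field iteration, the consistency check this stretch of the trace is driving (the (( ... )) tests at setup/hugepages.sh@107 and @110) is just the arithmetic below. The numbers are the ones reported in the meminfo snapshots printed above for this per_node_1G_alloc pass.

# Values taken from the trace; variable names follow the script.
nr_hugepages=512   # 512 pages x 2048 kB Hugepagesize = 1048576 kB, matching 'Hugetlb: 1048576 kB'
surp=0             # HugePages_Surp from /proc/meminfo
resv=0             # HugePages_Rsvd from /proc/meminfo
total=512          # HugePages_Total from /proc/meminfo
(( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'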
00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 
00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.134 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.134 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.135 00:16:26 -- setup/common.sh@33 -- # echo 512 00:04:11.135 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:11.135 00:16:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.135 00:16:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.135 00:16:26 -- setup/hugepages.sh@27 -- # local node 00:04:11.135 00:16:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.135 00:16:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.135 00:16:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.135 00:16:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.135 00:16:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.135 00:16:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.135 00:16:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.135 00:16:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.135 00:16:26 -- setup/common.sh@18 -- # local node=0 00:04:11.135 00:16:26 -- setup/common.sh@19 -- # local var val 00:04:11.135 00:16:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.135 00:16:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.135 00:16:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.135 00:16:26 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.135 00:16:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.135 00:16:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9149776 kB' 'MemUsed: 3089344 kB' 'SwapCached: 0 kB' 'Active: 456148 kB' 'Inactive: 1253064 kB' 'Active(anon): 128316 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1591380 kB' 'Mapped: 50808 kB' 'AnonPages: 119400 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155588 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- 
setup/common.sh@32 -- # continue 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.135 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.135 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # continue 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.136 00:16:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.136 00:16:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.136 00:16:26 -- setup/common.sh@33 -- # echo 0 00:04:11.136 00:16:26 -- setup/common.sh@33 -- # return 0 00:04:11.136 00:16:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.136 00:16:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.136 00:16:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.136 node0=512 expecting 512 00:04:11.136 00:16:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.136 00:16:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.136 00:04:11.136 real 0m0.539s 00:04:11.136 user 0m0.278s 00:04:11.136 sys 0m0.297s 00:04:11.136 00:16:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.136 00:16:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.136 ************************************ 00:04:11.136 END TEST per_node_1G_alloc 00:04:11.136 ************************************ 00:04:11.136 00:16:26 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:11.136 00:16:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.136 00:16:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.136 00:16:26 -- common/autotest_common.sh@10 -- # set +x 00:04:11.136 ************************************ 00:04:11.136 START TEST even_2G_alloc 00:04:11.136 ************************************ 00:04:11.136 00:16:26 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:11.136 00:16:26 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:11.136 00:16:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.136 00:16:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:11.136 00:16:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.136 00:16:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.136 00:16:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.136 00:16:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.136 00:16:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.136 00:16:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.136 00:16:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.136 00:16:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:11.136 00:16:26 -- setup/hugepages.sh@83 -- # : 0 00:04:11.136 00:16:26 -- 
setup/hugepages.sh@84 -- # : 0 00:04:11.136 00:16:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.136 00:16:26 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:11.136 00:16:26 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:11.136 00:16:26 -- setup/hugepages.sh@153 -- # setup output 00:04:11.136 00:16:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.136 00:16:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.708 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.708 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:11.708 00:16:27 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:11.708 00:16:27 -- setup/hugepages.sh@89 -- # local node 00:04:11.708 00:16:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.708 00:16:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.708 00:16:27 -- setup/hugepages.sh@92 -- # local surp 00:04:11.708 00:16:27 -- setup/hugepages.sh@93 -- # local resv 00:04:11.709 00:16:27 -- setup/hugepages.sh@94 -- # local anon 00:04:11.709 00:16:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.709 00:16:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.709 00:16:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.709 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:11.709 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:11.709 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.709 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.709 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.709 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.709 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.709 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8098372 kB' 'MemAvailable: 9473412 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456412 kB' 'Inactive: 1253064 kB' 'Active(anon): 128580 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119680 kB' 'Mapped: 50908 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155588 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93404 kB' 'KernelStack: 6440 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 
00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.709 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.709 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # 
continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.710 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:11.710 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:11.710 00:16:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.710 00:16:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.710 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.710 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:11.710 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:11.710 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.710 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.710 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.710 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.710 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.710 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8098372 kB' 'MemAvailable: 9473412 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456228 kB' 'Inactive: 1253064 kB' 'Active(anon): 128396 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155612 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93428 kB' 'KernelStack: 6432 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # 
continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.710 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.710 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- 
# continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.711 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:11.711 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:11.711 00:16:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.711 00:16:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.711 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.711 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:11.711 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:11.711 00:16:27 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:11.711 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.711 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.711 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.711 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.711 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8098796 kB' 'MemAvailable: 9473836 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 455984 kB' 'Inactive: 1253064 kB' 'Active(anon): 128152 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119212 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155608 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93424 kB' 'KernelStack: 6432 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.711 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.711 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 
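[editor's note] The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above and below are setup/common.sh walking /proc/meminfo one field at a time: each line is split on ': ', the loop keeps reading until the field name matches the requested key, and only then is the value echoed back to the caller. A minimal standalone sketch of that lookup pattern follows; the function name meminfo_lookup is illustrative and not taken from the script.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern visible in the trace: split each
# /proc/meminfo line on ': ', skip fields that do not match the requested key,
# and print the first matching value (the trailing "kB" unit lands in _).
meminfo_lookup() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # not the key we want -> next field
		echo "$val"
		return 0
	done < /proc/meminfo
	return 1
}

meminfo_lookup HugePages_Total   # prints 1024 on the test VM traced here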
00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- 
setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.712 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.712 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.713 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:11.713 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:11.713 00:16:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.713 nr_hugepages=1024 00:04:11.713 00:16:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:11.713 resv_hugepages=0 00:04:11.713 00:16:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.713 surplus_hugepages=0 00:04:11.713 00:16:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.713 anon_hugepages=0 00:04:11.713 00:16:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.713 00:16:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.713 00:16:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:11.713 00:16:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.713 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.713 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:11.713 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:11.713 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.713 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.713 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.713 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.713 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.713 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8101500 kB' 'MemAvailable: 9476540 kB' 
'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456244 kB' 'Inactive: 1253064 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119516 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155608 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93424 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- 
setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.713 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.713 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 
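[editor's note] Three such lookups feed the check running through this part of the trace: AnonHugePages (anon=0), HugePages_Surp (surp=0) and HugePages_Rsvd (resv=0) are read back, the script echoes its summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and HugePages_Total is then required to equal the configured count plus surplus plus reserved pages, exactly the "(( 1024 == nr_hugepages + surp + resv ))" arithmetic logged here. A hedged sketch of that accounting check; the hp helper is illustrative, not the script's own.

#!/usr/bin/env bash
# Sketch of the pool-accounting check from the trace: HugePages_Total read back
# from /proc/meminfo must equal the configured page count plus surplus plus
# reserved pages. nr_hugepages stands in for the value the test configured (1024).
nr_hugepages=1024

hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

surp=$(hp HugePages_Surp)     # surplus pages allocated beyond the persistent pool
resv=$(hp HugePages_Rsvd)     # pages reserved by mappings but not yet faulted in
total=$(hp HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
	echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2
	exit 1
fi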
00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.714 00:16:27 -- setup/common.sh@33 -- # echo 1024 00:04:11.714 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:11.714 00:16:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:11.714 00:16:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.714 00:16:27 -- setup/hugepages.sh@27 -- # local node 00:04:11.714 00:16:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.714 00:16:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:11.714 00:16:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.714 00:16:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.714 00:16:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.714 00:16:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.714 00:16:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.714 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.714 00:16:27 -- setup/common.sh@18 -- # local node=0 00:04:11.714 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:11.714 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.714 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.714 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.714 00:16:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.714 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.714 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8101500 kB' 'MemUsed: 4137620 kB' 'SwapCached: 0 kB' 'Active: 456208 kB' 'Inactive: 1253064 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 1591380 kB' 'Mapped: 50808 kB' 'AnonPages: 119504 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155604 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.714 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.714 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 
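[editor's note] The readout being scanned at this point is no longer the global /proc/meminfo: once get_nodes has enumerated the node* directories under /sys/devices/system/node (a single node on this VM), the same lookup is repeated against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the trace strips with ${mem[@]#Node +([0-9]) } before parsing. A sketch of that per-node variant; node_meminfo_lookup is an illustrative name.

#!/usr/bin/env bash
# Per-node variant of the lookup seen in the trace: read the node-local meminfo
# file and strip the "Node <N> " prefix so the usual "Key: value" parsing still
# applies. The function name is illustrative, not the script's own.
shopt -s extglob   # required for the +([0-9]) pattern below

node_meminfo_lookup() {
	local node=$1 get=$2 line var val _
	while read -r line; do
		line=${line#Node +([0-9]) }          # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < "/sys/devices/system/node/node${node}/meminfo"
	return 1
}

node_meminfo_lookup 0 HugePages_Surp   # prints 0 against the node0 readout above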
00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- 
setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # continue 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.715 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.715 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.715 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:11.715 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:11.715 00:16:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.715 00:16:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.715 00:16:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.715 node0=1024 expecting 1024 00:04:11.715 00:16:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:11.715 00:16:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:11.715 00:04:11.715 real 0m0.530s 00:04:11.715 user 0m0.251s 00:04:11.715 sys 0m0.315s 
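[editor's note] The scan above is bash xtrace of the meminfo lookup in setup/common.sh: each /proc/meminfo line is split on IFS=': ' into a key and a value, every key that is not the requested one (HugePages_Surp here) takes the "continue" branch, and the value of the matching key (0 on this run) is echoed back to hugepages.sh. A minimal stand-alone sketch of that lookup follows; it is not the actual setup/common.sh code, and the helper name is made up for illustration.

# sketch only -- meminfo_value is a hypothetical helper, not part of SPDK
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # echo the value of the requested key and stop; skip everything else
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
meminfo_value HugePages_Surp   # prints 0 on this run, as the trace shows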
00:04:11.715 00:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.715 00:16:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.715 ************************************ 00:04:11.715 END TEST even_2G_alloc 00:04:11.715 ************************************ 00:04:11.715 00:16:27 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:11.715 00:16:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.715 00:16:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.715 00:16:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.715 ************************************ 00:04:11.715 START TEST odd_alloc 00:04:11.715 ************************************ 00:04:11.715 00:16:27 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:11.715 00:16:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:11.715 00:16:27 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:11.715 00:16:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:11.715 00:16:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.715 00:16:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.715 00:16:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.715 00:16:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:11.715 00:16:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:11.715 00:16:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.715 00:16:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.715 00:16:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:11.715 00:16:27 -- setup/hugepages.sh@83 -- # : 0 00:04:11.715 00:16:27 -- setup/hugepages.sh@84 -- # : 0 00:04:11.715 00:16:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.715 00:16:27 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:11.715 00:16:27 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:11.715 00:16:27 -- setup/hugepages.sh@160 -- # setup output 00:04:11.715 00:16:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.715 00:16:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.288 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.288 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:12.288 00:16:27 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:12.288 00:16:27 -- setup/hugepages.sh@89 -- # local node 00:04:12.288 00:16:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:12.288 00:16:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:12.288 00:16:27 -- setup/hugepages.sh@92 -- # local surp 00:04:12.288 00:16:27 -- setup/hugepages.sh@93 -- # local resv 00:04:12.288 00:16:27 -- setup/hugepages.sh@94 -- # local anon 00:04:12.288 00:16:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.288 00:16:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:12.288 00:16:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.288 00:16:27 -- setup/common.sh@18 -- # local node= 
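[editor's note] The odd_alloc test started above asks for HUGEMEM=2049 (MB); get_test_nr_hugepages turns that into size=2098176 kB and nr_hugepages=1025, i.e. a deliberately odd count of default 2048 kB hugepages on the single test node. A rough sketch of that conversion, assuming the leftover half page is simply rounded up (the exact rounding lives in hugepages.sh and is not reproduced here):

# sketch only: how 2049 MB becomes the 1025 pages seen in the trace
hugemem_mb=2049
size_kb=$(( hugemem_mb * 1024 ))      # 2098176 kB, matching get_test_nr_hugepages
hugepage_kb=2048                      # Hugepagesize reported in the meminfo dumps below
echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # 1025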
00:04:12.288 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:12.288 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.288 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.288 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.288 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.288 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.288 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.288 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.288 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8107028 kB' 'MemAvailable: 9482068 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456396 kB' 'Inactive: 1253064 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119520 kB' 'Mapped: 50936 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155528 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93344 kB' 'KernelStack: 6468 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 
00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 
00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.289 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.289 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.290 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:12.290 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:12.290 00:16:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:12.290 00:16:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:12.290 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.290 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:12.290 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:12.290 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.290 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.290 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.290 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.290 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.290 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 
00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106776 kB' 'MemAvailable: 9481816 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456144 kB' 'Inactive: 1253064 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 119188 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155536 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6460 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 
00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.290 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.290 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 
00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.291 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.291 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.292 00:16:27 -- setup/common.sh@33 -- # echo 0 00:04:12.292 00:16:27 -- setup/common.sh@33 -- # return 0 00:04:12.292 00:16:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:12.292 00:16:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.292 00:16:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.292 00:16:27 -- setup/common.sh@18 -- # local node= 00:04:12.292 00:16:27 -- setup/common.sh@19 -- # local var val 00:04:12.292 00:16:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.292 00:16:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.292 00:16:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.292 00:16:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.292 00:16:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.292 00:16:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106776 kB' 'MemAvailable: 9481816 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456144 kB' 'Inactive: 1253064 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155536 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6460 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 
00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.292 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.292 00:16:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:27 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 
-- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.293 
00:16:28 -- setup/common.sh@32 -- # continue
00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:04:12.293 00:16:28 -- setup/common.sh@31 -- # read -r var val _
00:04:12.293 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.293 00:16:28 -- setup/common.sh@33 -- # echo 0
00:04:12.293 00:16:28 -- setup/common.sh@33 -- # return 0
00:04:12.293 00:16:28 -- setup/hugepages.sh@100 -- # resv=0
00:04:12.293 00:16:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:12.293 nr_hugepages=1025
00:04:12.293 00:16:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.293 resv_hugepages=0
00:04:12.293 00:16:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.293 surplus_hugepages=0
00:04:12.293 00:16:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.293 anon_hugepages=0
00:04:12.293 00:16:28 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:12.293 00:16:28 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:12.293 00:16:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.293 00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.293 00:16:28 -- setup/common.sh@18 -- # local node=
00:04:12.293 00:16:28 -- setup/common.sh@19 -- # local var val
00:04:12.293 00:16:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.293 00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.293 00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.293 00:16:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.293 00:16:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.293 00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.293 00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:04:12.294 00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106776 kB' 'MemAvailable: 9481816 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456404 kB' 'Inactive: 1253064 kB' 'Active(anon): 128572 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119448 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155536 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93352 kB' 'KernelStack: 6460 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:04:12.294 00:16:28 -- setup/common.sh@31 -- # read -r var val _
[repeated setup/common.sh@31-32 read/compare/continue entries omitted: every /proc/meminfo field before HugePages_Total is read and skipped]
00:04:12.295 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.295 00:16:28 -- setup/common.sh@33 -- # echo 1025
00:04:12.295 00:16:28 -- setup/common.sh@33 -- # return 0
00:04:12.295 00:16:28 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:12.295 00:16:28 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.295 00:16:28 -- setup/hugepages.sh@27 -- # local node
00:04:12.295 00:16:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.295 00:16:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:12.295 00:16:28 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.295 00:16:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.295 00:16:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.295 00:16:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:16:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.295 00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.295 00:16:28 -- setup/common.sh@18 -- # local node=0
00:04:12.295 00:16:28 -- setup/common.sh@19 -- # local var val
00:04:12.295 00:16:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.295 00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.295 00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.295 00:16:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.295 00:16:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.295 00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.295 00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:04:12.295 00:16:28 -- setup/common.sh@31 -- # read -r var val _
00:04:12.295 00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8106776 kB' 'MemUsed: 4132344 kB' 'SwapCached: 0 kB' 'Active: 456380 kB' 'Inactive: 1253064 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1591380 kB' 'Mapped: 50808 kB' 'AnonPages: 119460 kB' 'Shmem: 10484 kB' 'KernelStack: 6460 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155524 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[repeated setup/common.sh@31-32 read/compare/continue entries omitted: every node0 meminfo field before HugePages_Surp is read and skipped]
00:04:12.296 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.296 00:16:28 -- setup/common.sh@33 -- # echo 0
00:04:12.296 00:16:28 -- setup/common.sh@33 -- # return 0
00:04:12.296 00:16:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.296 00:16:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.296 00:16:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.296 00:16:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.296 00:16:28 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:12.296 node0=1025 expecting 1025
00:04:12.296 00:16:28 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:12.296 real	0m0.545s
00:04:12.296 user	0m0.279s
00:04:12.296 sys	0m0.302s
00:04:12.296 00:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:12.296 00:16:28 -- common/autotest_common.sh@10 -- # set +x
00:04:12.296 ************************************
00:04:12.296 END TEST odd_alloc
00:04:12.296 ************************************
00:04:12.296 00:16:28 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:12.296 00:16:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:12.296 00:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:12.296 00:16:28 -- common/autotest_common.sh@10 -- # set +x
00:04:12.296 ************************************
00:04:12.296 START TEST custom_alloc
00:04:12.296 ************************************
00:04:12.297 00:16:28 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:12.297 00:16:28 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:12.297 00:16:28 -- setup/hugepages.sh@169 -- # local node
00:04:12.297 00:16:28 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:12.297 00:16:28 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:12.297 00:16:28 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:12.297 00:16:28 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:12.297 00:16:28 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:12.297 00:16:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:12.297 00:16:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:12.297 00:16:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512
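Every hugepage figure checked in the odd_alloc pass above, and in the custom_alloc verification further down, comes from the same get_meminfo helper: one field is looked up in /proc/meminfo, or in /sys/devices/system/node/nodeN/meminfo when a node number is given. A minimal stand-alone sketch of that kind of lookup, written for readers of this log and not the exact SPDK helper traced here:

  # Sketch only: field lookup in the system-wide or per-node meminfo file.
  get_meminfo() {
      local field=$1 node=$2 file=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          file=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix each line with "Node <n>"; strip it, then print the value column.
      sed 's/^Node [0-9]* *//' "$file" | awk -v f="$field:" '$1 == f { print $2; exit }'
  }
  # Examples matching the trace: get_meminfo HugePages_Total   -> 1025
  #                              get_meminfo HugePages_Surp 0  -> surplus pages on node 0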
00:04:12.297 00:16:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:16:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:16:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:16:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:16:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:16:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:16:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:16:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:16:28 -- setup/hugepages.sh@83 -- # : 0
00:16:28 -- setup/hugepages.sh@84 -- # : 0
00:16:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:16:28 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:16:28 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:16:28 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:16:28 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:16:28 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:16:28 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:16:28 -- setup/hugepages.sh@62 -- # user_nodes=()
00:16:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:16:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:16:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:16:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:16:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:16:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:16:28 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:16:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:16:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:16:28 -- setup/hugepages.sh@78 -- # return 0
00:16:28 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:16:28 -- setup/hugepages.sh@187 -- # setup output
00:16:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:16:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:12.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:12.868 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.868 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
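With HUGENODE='nodes_hp[0]=512', scripts/setup.sh is being asked to place the 512 default-size (2048 kB) hugepages computed above on node 0 before verification starts. The script's internals are not shown in this trace; purely as an illustration, an equivalent per-node reservation can be made through the standard kernel sysfs interface:

  # Illustration only (not scripts/setup.sh): reserve 512 x 2048 kB hugepages on NUMA node 0.
  echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  # The counters that verify_nr_hugepages reads next; Hugetlb should show 512 * 2048 = 1048576 kB.
  grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb)' /proc/meminfo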
00:16:28 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:16:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:16:28 -- setup/hugepages.sh@89 -- # local node
00:16:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:16:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:16:28 -- setup/hugepages.sh@92 -- # local surp
00:16:28 -- setup/hugepages.sh@93 -- # local resv
00:16:28 -- setup/hugepages.sh@94 -- # local anon
00:16:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:16:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:16:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:16:28 -- setup/common.sh@18 -- # local node=
00:16:28 -- setup/common.sh@19 -- # local var val
00:16:28 -- setup/common.sh@20 -- # local mem_f mem
00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:28 -- setup/common.sh@28 -- # mapfile -t mem
00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:16:28 -- setup/common.sh@31 -- # read -r var val _
00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9152476 kB' 'MemAvailable: 10527516 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456812 kB' 'Inactive: 1253064 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120060 kB' 'Mapped: 50912 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155532 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93348 kB' 'KernelStack: 6408 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 321752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[repeated setup/common.sh@31-32 read/compare/continue entries omitted: every /proc/meminfo field before AnonHugePages is read and skipped]
00:16:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:16:28 -- setup/common.sh@33 -- # echo 0
00:16:28 -- setup/common.sh@33 -- # return 0
00:16:28 -- setup/hugepages.sh@97 -- # anon=0
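The rest of verify_nr_hugepages gathers the surplus and reserved counters the same way and then checks the accounting: anonymous (transparent) hugepage usage should stay at zero while explicit hugepages are under test, and HugePages_Total should equal the requested count plus surplus plus reserved. A compact, self-contained sketch of that check (the value 512 mirrors the custom_alloc request; variable names are illustrative):

  # Sketch of the accounting check the following trace performs.
  nr_requested=512
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
  anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # kB of THP in use
  (( anon == 0 ))                           || echo "unexpected transparent hugepage usage: ${anon} kB"
  (( total == nr_requested + surp + resv )) || echo "hugepage accounting mismatch: total=${total}"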
00:16:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:16:28 -- setup/common.sh@18 -- # local node=
00:16:28 -- setup/common.sh@19 -- # local var val
00:16:28 -- setup/common.sh@20 -- # local mem_f mem
00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:28 -- setup/common.sh@28 -- # mapfile -t mem
00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:16:28 -- setup/common.sh@31 -- # read -r var val _
00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9152736 kB' 'MemAvailable: 10527776 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456240 kB' 'Inactive: 1253064 kB' 'Active(anon): 128408 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119344 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155524 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93340 kB' 'KernelStack: 6432 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[repeated setup/common.sh@31-32 read/compare/continue entries omitted: every /proc/meminfo field before HugePages_Surp is read and skipped]
00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:16:28 -- setup/common.sh@33 -- # echo 0
00:16:28 -- setup/common.sh@33 -- # return 0
00:16:28 -- setup/hugepages.sh@99 -- # surp=0
00:16:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:16:28 -- setup/common.sh@18 -- # local node=
00:16:28 -- setup/common.sh@19 -- # local var val
00:16:28 -- setup/common.sh@20 -- # local mem_f mem
00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:16:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:16:28 -- setup/common.sh@28 -- # mapfile -t mem
00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:16:28 -- setup/common.sh@31 -- # IFS=': '
00:16:28 -- setup/common.sh@31 -- # read -r var val _
00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9152816 kB' 'MemAvailable: 10527856 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 455860 kB' 'Inactive: 1253064 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119236 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155504 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93320 kB' 'KernelStack: 6432 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.871 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.871 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 
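The xtrace above is setup/common.sh caching a copy of /proc/meminfo (or a node's meminfo under sysfs) and scanning it one key at a time for the value it was asked for. A minimal standalone sketch of that pattern, written from the expressions visible in the trace; the function name, argument handling and failure return are assumptions rather than the upstream source:

  #!/usr/bin/env bash
  shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

  # get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
  # /sys/devices/system/node/nodeNODE/meminfo when a node id is given.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Surp      # prints 0 for the snapshot above
  get_meminfo HugePages_Free 0    # same counter, read from node0's sysfs file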
00:04:12.871 00:16:28 [xtrace condensed: setup/common.sh@32 walks the refreshed snapshot again, comparing every key from Inactive(file) through HugePages_Total against HugePages_Rsvd and continuing on each non-match] 00:04:12.872 00:16:28 -- setup/common.sh@31 
-- # read -r var val _ 00:04:12.872 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.872 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.872 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.872 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.872 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.872 00:16:28 -- setup/common.sh@33 -- # echo 0 00:04:12.872 00:16:28 -- setup/common.sh@33 -- # return 0 00:04:12.872 nr_hugepages=512 00:04:12.872 00:16:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.872 00:16:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:12.872 resv_hugepages=0 00:04:12.872 surplus_hugepages=0 00:04:12.872 anon_hugepages=0 00:04:12.872 00:16:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.872 00:16:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.872 00:16:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.872 00:16:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.872 00:16:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:12.872 00:16:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.872 00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.872 00:16:28 -- setup/common.sh@18 -- # local node= 00:04:12.872 00:16:28 -- setup/common.sh@19 -- # local var val 00:04:12.872 00:16:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.872 00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.872 00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.872 00:16:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.872 00:16:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.872 00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.872 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.872 00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9152816 kB' 'MemAvailable: 10527856 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456144 kB' 'Inactive: 1253064 kB' 'Active(anon): 128312 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119516 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155504 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93320 kB' 'KernelStack: 6448 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:12.872 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.872 00:16:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.872 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.872 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 
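By this point the trace has collected surp=0 and resv=0, echoed nr_hugepages=512, and checked that the kernel's pool adds up. The arithmetic reduces to the check sketched below; it reuses the get_meminfo sketch above, and the function name and output format are illustrative, not the upstream code:

  # verify_hugepage_pool EXPECTED - confirm the configured pool matches the request.
  verify_hugepage_pool() {
      local expected=$1    # 512 in this run
      local total surp rsvd
      total=$(get_meminfo HugePages_Total)
      surp=$(get_meminfo HugePages_Surp)
      rsvd=$(get_meminfo HugePages_Rsvd)
      # What the kernel reports must equal the requested pool plus any
      # surplus or reserved pages on top of it.
      (( total == expected + surp + rsvd )) || return 1
      echo "nr_hugepages=$expected resv_hugepages=$rsvd surplus_hugepages=$surp"
  }

  verify_hugepage_pool 512    # matches the nr_hugepages=512 / surplus_hugepages=0 lines above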
00:04:12.872 00:16:28 [xtrace condensed: setup/common.sh@31/@32 read the fresh snapshot and scan it for HugePages_Total, continuing past every other key from MemFree through Percpu]
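After the global pool is confirmed, the trace that follows moves to per-node accounting: the node directories under /sys/devices/system/node are enumerated and the same counters are read from each node's meminfo, ending in the "node0=512 expecting 512" line further down. A rough sketch of that pass, again built on the get_meminfo sketch above and with illustrative names:

  # check_nodes EXPECTED - report every NUMA node's hugepage pool.
  check_nodes() {
      local expected=$1    # 512 per node in this single-node run
      local node id total surp
      for node in /sys/devices/system/node/node[0-9]*; do
          id=${node##*node}
          total=$(get_meminfo HugePages_Total "$id")
          surp=$(get_meminfo HugePages_Surp "$id")
          echo "node$id=$total expecting $expected (surplus $surp)"
      done
  }

  check_nodes 512    # prints "node0=512 expecting 512 (surplus 0)" on this runner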
00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.873 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.873 00:16:28 -- setup/common.sh@33 -- # echo 512 00:04:12.873 00:16:28 -- setup/common.sh@33 -- # return 0 00:04:12.873 00:16:28 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:12.873 00:16:28 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.873 00:16:28 -- setup/hugepages.sh@27 -- # local node 00:04:12.873 00:16:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.873 00:16:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.873 00:16:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.873 00:16:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.873 00:16:28 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:04:12.873 00:16:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.873 00:16:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.873 00:16:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.873 00:16:28 -- setup/common.sh@18 -- # local node=0 00:04:12.873 00:16:28 -- setup/common.sh@19 -- # local var val 00:04:12.873 00:16:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.873 00:16:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.873 00:16:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.873 00:16:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.873 00:16:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.873 00:16:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.873 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 9152816 kB' 'MemUsed: 3086304 kB' 'SwapCached: 0 kB' 'Active: 455948 kB' 'Inactive: 1253064 kB' 'Active(anon): 128116 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1591380 kB' 'Mapped: 50808 kB' 'AnonPages: 119264 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155504 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 
00:16:28 [xtrace condensed: setup/common.sh@32 runs the same per-key scan over node0's meminfo looking for HugePages_Surp, continuing past each non-matching key] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # continue 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.874 00:16:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.874 00:16:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.874 00:16:28 -- setup/common.sh@33 -- # echo 0 00:04:12.874 00:16:28 -- setup/common.sh@33 -- # return 0 00:04:12.874 00:16:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.874 00:16:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.874 00:16:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.874 node0=512 expecting 512 00:04:12.874 00:16:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.874 00:16:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.874 00:16:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.874 00:04:12.874 real 0m0.580s 00:04:12.874 user 0m0.274s 00:04:12.874 sys 0m0.319s 00:04:12.874 00:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.874 00:16:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.874 ************************************ 00:04:12.874 END TEST custom_alloc 00:04:12.874 ************************************ 00:04:13.133 00:16:28 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:13.133 00:16:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.133 00:16:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.133 00:16:28 -- common/autotest_common.sh@10 -- # set +x 00:04:13.133 ************************************ 00:04:13.133 START TEST no_shrink_alloc 00:04:13.133 ************************************ 00:04:13.133 00:16:28 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:13.133 00:16:28 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:13.133 00:16:28 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.133 00:16:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:13.133 00:16:28 -- setup/hugepages.sh@51 -- # shift 00:04:13.133 00:16:28 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:13.133 00:16:28 -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.133 00:16:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.133 00:16:28 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:13.133 00:16:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:13.133 00:16:28 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:13.133 00:16:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.133 00:16:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.133 00:16:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:13.133 00:16:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.133 00:16:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.133 00:16:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:13.133 00:16:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.133 00:16:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:13.133 00:16:28 -- setup/hugepages.sh@73 -- # return 0 00:04:13.133 00:16:28 -- setup/hugepages.sh@198 -- # setup output 00:04:13.133 00:16:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.133 00:16:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.395 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.395 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.395 00:16:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:13.395 00:16:29 -- setup/hugepages.sh@89 -- # local node 00:04:13.395 00:16:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.395 00:16:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.395 00:16:29 -- setup/hugepages.sh@92 -- # local surp 00:04:13.395 00:16:29 -- setup/hugepages.sh@93 -- # local resv 00:04:13.395 00:16:29 -- setup/hugepages.sh@94 -- # local anon 00:04:13.395 00:16:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.395 00:16:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.395 00:16:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.395 00:16:29 -- setup/common.sh@18 -- # local node= 00:04:13.395 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.395 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.395 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.395 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.395 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.395 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.395 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.395 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.395 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100984 kB' 'MemAvailable: 9476024 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456692 kB' 'Inactive: 1253064 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 50924 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155484 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93300 kB' 'KernelStack: 6488 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 323060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 
0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.395 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 
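The no_shrink_alloc test that has just started asked for 2097152 kB of hugepages, and the trace turned that into nr_hugepages=1024, consistent with the Hugepagesize of 2048 kB and the Hugetlb total of 2097152 kB in the snapshot above; it also only proceeds because transparent hugepages are not set to [never]. A small sketch of that conversion, using the get_meminfo sketch above; the function name and the kB unit are inferred from those values:

  # pages_for_size_kb SIZE_KB - how many default-sized hugepages cover SIZE_KB.
  pages_for_size_kb() {
      local size_kb=$1 hugepagesize_kb
      hugepagesize_kb=$(get_meminfo Hugepagesize)    # 2048 on this runner
      echo $(( size_kb / hugepagesize_kb ))
  }

  pages_for_size_kb 2097152    # -> 1024, the nr_hugepages value echoed above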
00:04:13.396 00:16:29 [xtrace condensed: setup/common.sh@32 scans the new snapshot for AnonHugePages, continuing past every key from Inactive(file) through VmallocUsed] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # 
continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.396 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.396 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.396 00:16:29 -- setup/common.sh@33 -- # echo 0 00:04:13.396 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.396 00:16:29 -- setup/hugepages.sh@97 -- # anon=0 00:04:13.396 00:16:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.396 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.396 00:16:29 -- setup/common.sh@18 -- # local node= 00:04:13.396 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.396 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.396 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.396 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.397 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.397 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.397 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100984 kB' 'MemAvailable: 9476024 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456448 kB' 'Inactive: 1253064 kB' 'Active(anon): 128616 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119548 kB' 'Mapped: 50924 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155476 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93292 kB' 'KernelStack: 6456 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # 
continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 
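Note on the repeated trace entries above: the long runs of IFS=': ' / read -r var val _ / continue come from setup/common.sh scanning the mapfile'd meminfo contents one key at a time until it reaches the requested field (here AnonHugePages, then HugePages_Surp). A minimal sketch of that kind of lookup, using a hypothetical helper name and simplified handling rather than the exact script source:

  #!/usr/bin/env bash
  shopt -s extglob
  # get_meminfo_sketch FIELD [NODE] - print FIELD's value from /proc/meminfo,
  # or from the per-node meminfo when NODE is given (as the trace does later).
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # non-matching keys are skipped, as in the trace
          echo "$val"
          return 0
      done
      return 1
  }
  # e.g. get_meminfo_sketch HugePages_Surp  -> 0 in this run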
00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.397 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.397 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.398 00:16:29 -- setup/common.sh@33 -- # echo 0 00:04:13.398 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.398 00:16:29 -- setup/hugepages.sh@99 -- # surp=0 00:04:13.398 00:16:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.398 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.398 00:16:29 -- setup/common.sh@18 -- # local node= 00:04:13.398 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.398 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.398 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.398 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.398 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.398 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.398 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100984 kB' 'MemAvailable: 9476024 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 456180 kB' 'Inactive: 1253064 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119532 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155472 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93288 kB' 'KernelStack: 6448 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.398 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.398 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- 
setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.399 00:16:29 -- setup/common.sh@33 -- # echo 0 00:04:13.399 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.399 00:16:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:13.399 nr_hugepages=1024 00:04:13.399 00:16:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.399 resv_hugepages=0 00:04:13.399 00:16:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.399 surplus_hugepages=0 00:04:13.399 00:16:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.399 anon_hugepages=0 00:04:13.399 00:16:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.399 00:16:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.399 00:16:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.399 00:16:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.399 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.399 00:16:29 -- setup/common.sh@18 -- # local node= 00:04:13.399 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.399 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.399 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
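At this point the script has collected anon=0, surp=0 and resv=0, echoes nr_hugepages=1024, and asserts (( 1024 == nr_hugepages + surp + resv )) before re-reading HugePages_Total. A hedged stand-alone sketch of that same accounting check (verify_hugepages_sketch is an illustrative name, not the function used here):

  #!/usr/bin/env bash
  # Recreates the check seen in the trace: 1024 == 1024 + 0 + 0 in this run.
  verify_hugepages_sketch() {
      local total nr surp resv
      nr=$(< /proc/sys/vm/nr_hugepages)
      total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
      surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
      resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
      if (( total == nr + surp + resv )); then
          echo "hugepage accounting OK: total=$total nr=$nr surp=$surp resv=$resv"
      else
          echo "hugepage accounting mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
          return 1
      fi
  }
  verify_hugepages_sketch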
00:04:13.399 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.399 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.399 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.399 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100984 kB' 'MemAvailable: 9476024 kB' 'Buffers: 2684 kB' 'Cached: 1588696 kB' 'SwapCached: 0 kB' 'Active: 455936 kB' 'Inactive: 1253064 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119292 kB' 'Mapped: 50808 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155472 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93288 kB' 'KernelStack: 6448 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.399 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.399 00:16:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.659 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.659 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- 
setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 
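The meminfo snapshot a few lines up reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is internally consistent when only one hugepage size is in use: 1024 pages x 2048 kB = 2097152 kB of hugetlb memory. A one-liner to confirm that relationship on a live system:

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {sz=$2} /^Hugetlb:/ {h=$2}
       END {printf "total=%d size=%d kB hugetlb=%d kB consistent=%s\n", t, sz, h, ((t*sz==h) ? "yes" : "no")}' /proc/meminfo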
00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 
00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.660 00:16:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.660 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.660 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.661 00:16:29 -- setup/common.sh@33 -- # echo 1024 00:04:13.661 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.661 00:16:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.661 00:16:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.661 00:16:29 -- setup/hugepages.sh@27 -- # local node 00:04:13.661 00:16:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.661 00:16:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.661 00:16:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:13.661 00:16:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.661 00:16:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.661 00:16:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.661 00:16:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.661 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.661 00:16:29 -- setup/common.sh@18 -- # local node=0 00:04:13.661 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.661 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.661 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.661 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.661 00:16:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.661 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.661 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8101504 kB' 'MemUsed: 4137616 kB' 'SwapCached: 0 kB' 'Active: 455936 kB' 'Inactive: 1253064 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1591380 kB' 'Mapped: 50808 kB' 'AnonPages: 119292 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155472 
kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93288 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 
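From here the trace switches to the per-node view: get_nodes globs /sys/devices/system/node/node*, records 1024 expected pages for the single node, and get_meminfo is re-run with node=0 so that mem_f points at /sys/devices/system/node/node0/meminfo (whose lines carry a "Node 0 " prefix, hence the extra stripping step). A small sketch of that per-node walk, using illustrative names only:

  #!/usr/bin/env bash
  shopt -s nullglob
  # Report hugepage counts per NUMA node, similar to the node0 snapshot above.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024", so the value is field 4.
      total=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
      free=$(awk '/HugePages_Free:/ {print $4}' "$node_dir/meminfo")
      surp=$(awk '/HugePages_Surp:/ {print $4}' "$node_dir/meminfo")
      echo "node$node: HugePages_Total=$total Free=$free Surp=$surp"
  done
  # In this run: node0: HugePages_Total=1024 Free=1024 Surp=0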
00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.661 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.661 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- 
setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.662 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.662 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.662 00:16:29 -- setup/common.sh@33 -- # echo 0 00:04:13.662 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.662 00:16:29 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:13.662 00:16:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.662 00:16:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.662 00:16:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.662 node0=1024 expecting 1024 00:04:13.662 00:16:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:13.662 00:16:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.662 00:16:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:13.662 00:16:29 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:13.662 00:16:29 -- setup/hugepages.sh@202 -- # setup output 00:04:13.662 00:16:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.662 00:16:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:13.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:13.925 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.925 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:13.925 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:13.925 00:16:29 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:13.925 00:16:29 -- setup/hugepages.sh@89 -- # local node 00:04:13.925 00:16:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.925 00:16:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.925 00:16:29 -- setup/hugepages.sh@92 -- # local surp 00:04:13.925 00:16:29 -- setup/hugepages.sh@93 -- # local resv 00:04:13.925 00:16:29 -- setup/hugepages.sh@94 -- # local anon 00:04:13.925 00:16:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.925 00:16:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.925 00:16:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.925 00:16:29 -- setup/common.sh@18 -- # local node= 00:04:13.925 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:13.925 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:13.925 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.925 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.925 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.925 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.925 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.925 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.925 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.925 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100788 kB' 'MemAvailable: 9475832 kB' 'Buffers: 2684 kB' 'Cached: 1588700 kB' 'SwapCached: 0 kB' 'Active: 456356 kB' 'Inactive: 1253068 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119660 kB' 'Mapped: 51208 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155472 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93288 kB' 'KernelStack: 6456 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
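The records above spell out the helper that drives every one of these reads: get_meminfo pulls /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node id is supplied), strips the leading "Node <N>" tag, then walks the fields with IFS=': ' until the requested key matches and its value is echoed. A minimal bash sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh (names and structure are illustrative):

#!/usr/bin/env bash
shopt -s extglob                          # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {                    # illustrative stand-in for get_meminfo
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # per-node counters live in a separate file when a NUMA node is requested
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix used in per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp         # prints 0 on the system traced above
get_meminfo_sketch HugePages_Free 0       # per-node read; prints 1024 for node0 here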
[xtrace: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for every field of the snapshot above until AnonHugePages matches]
00:04:13.926 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:13.926 00:16:29 -- setup/common.sh@33 -- # echo 0
00:04:13.926 00:16:29 -- setup/common.sh@33 -- # return 0
00:04:13.926 00:16:29 -- setup/hugepages.sh@97 -- # anon=0
00:04:13.926 00:16:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:13.926 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:13.926 00:16:29 -- setup/common.sh@18 -- # local node=
00:04:13.926 00:16:29 -- setup/common.sh@19 -- # local var val
00:04:13.926 00:16:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.926 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.926 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.926 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.926 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.926 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.926 00:16:29 -- setup/common.sh@31 -- # IFS=': '
00:04:13.926 00:16:29 -- setup/common.sh@31 -- # read -r var val _
00:04:13.926 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100884 kB' 'MemAvailable: 9475928 kB' 'Buffers: 2684 kB' 'Cached: 1588700 kB' 'SwapCached: 0 kB' 'Active: 456048 kB' 'Inactive: 1253068 kB' 'Active(anon): 128216 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119356 kB' 'Mapped: 50936 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155452 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93268 kB' 'KernelStack: 6376 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
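The same counters the loop extracts can be pulled by hand when reproducing this check outside the test harness; the awk one-liners below are an equivalent shortcut, not the code the suite runs:

awk '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo    # 0 on this run
awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo    # 0 on this run
awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo    # 1024 on this run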
[xtrace: setup/common.sh@31-32 repeats the same per-field scan against HugePages_Surp; no field matches until HugePages_Surp itself]
00:04:13.927 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:13.927 00:16:29 -- setup/common.sh@33 -- # echo 0
00:04:13.927 00:16:29 -- setup/common.sh@33 -- # return 0
00:04:13.927 00:16:29 -- setup/hugepages.sh@99 -- # surp=0
00:04:13.927 00:16:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:13.927 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:13.927 00:16:29 -- setup/common.sh@18 -- # local node=
00:04:13.927 00:16:29 -- setup/common.sh@19 -- # local var val
00:04:13.927 00:16:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.927 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.927 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.927 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.927 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.927 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.927 00:16:29 -- setup/common.sh@31 -- # IFS=': '
00:04:13.927 00:16:29 -- setup/common.sh@31 -- # read -r var val _
00:04:13.927 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100884 kB' 'MemAvailable: 9475928 kB' 'Buffers: 2684 kB' 'Cached: 1588700 kB' 'SwapCached: 0 kB' 'Active: 455912 kB' 'Inactive: 1253068 kB' 'Active(anon): 128080 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119220 kB' 'Mapped: 50816 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155460 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93276 kB' 'KernelStack: 6384 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[xtrace: setup/common.sh@31-32 repeats the per-field scan against HugePages_Rsvd; no field matches until HugePages_Rsvd itself]
00:04:13.929 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:13.929 00:16:29 -- setup/common.sh@33 -- # echo 0
00:04:13.929 00:16:29 -- setup/common.sh@33 -- # return 0
00:04:13.929 00:16:29 -- setup/hugepages.sh@100 -- # resv=0
00:04:13.929 nr_hugepages=1024
00:16:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:13.929 resv_hugepages=0
00:16:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:13.929 surplus_hugepages=0
00:16:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:13.929 anon_hugepages=0
00:16:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:13.929 00:16:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:13.929 00:16:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
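Taken together, the function just reads the hugepage counters and asserts that the preallocated pool is intact: nothing reserved, nothing surplus, no transparent hugepages in play, and the pool still at its expected size. A hedged reconstruction of that check (variable names here are illustrative; only the expanded values 1024 and 0 come from the trace):

#!/usr/bin/env bash
# Reconstruction of the traced verification, not the SPDK script itself.
expected=1024                                         # pool size this run expects

anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

echo "nr_hugepages=$total"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# mirrors the two assertions traced at setup/hugepages.sh@107 and @109,
# which on this run reduce to (( 1024 == 1024 + 0 + 0 )) and (( 1024 == 1024 ))
(( total == expected + surp + resv )) || exit 1
(( total == expected )) || exit 1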
00:04:13.929 00:16:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:13.929 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:13.929 00:16:29 -- setup/common.sh@18 -- # local node=
00:04:13.929 00:16:29 -- setup/common.sh@19 -- # local var val
00:04:13.929 00:16:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:13.929 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:13.929 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:13.929 00:16:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:13.929 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:13.929 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:13.929 00:16:29 -- setup/common.sh@31 -- # IFS=': '
00:04:13.929 00:16:29 -- setup/common.sh@31 -- # read -r var val _
00:04:13.929 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100884 kB' 'MemAvailable: 9475928 kB' 'Buffers: 2684 kB' 'Cached: 1588700 kB' 'SwapCached: 0 kB' 'Active: 455912 kB' 'Inactive: 1253068 kB' 'Active(anon): 128080 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119480 kB' 'Mapped: 50816 kB' 'Shmem: 10484 kB' 'KReclaimable: 62184 kB' 'Slab: 155460 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93276 kB' 'KernelStack: 6384 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[xtrace: setup/common.sh@31-32 begins the same per-field scan against HugePages_Total; the excerpt breaks off in the middle of that scan, at the FilePmdMapped field]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # continue 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:13.930 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:13.930 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.930 00:16:29 -- setup/common.sh@33 -- # echo 1024 00:04:13.930 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:13.930 00:16:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.930 00:16:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.930 00:16:29 -- setup/hugepages.sh@27 -- # local node 00:04:13.930 00:16:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.190 00:16:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.190 00:16:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:14.190 00:16:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.190 00:16:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.190 00:16:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.190 00:16:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.190 00:16:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.190 00:16:29 -- setup/common.sh@18 -- # local node=0 00:04:14.190 00:16:29 -- setup/common.sh@19 -- # local var val 00:04:14.190 00:16:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.190 00:16:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.190 00:16:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.190 00:16:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.190 00:16:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.190 00:16:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 8100884 kB' 'MemUsed: 4138236 kB' 'SwapCached: 0 kB' 'Active: 455828 kB' 'Inactive: 1253068 kB' 'Active(anon): 127996 kB' 'Inactive(anon): 0 kB' 'Active(file): 327832 kB' 'Inactive(file): 1253068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1591384 kB' 'Mapped: 50816 kB' 'AnonPages: 119128 kB' 'Shmem: 10484 kB' 'KernelStack: 6452 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62184 kB' 'Slab: 155468 kB' 'SReclaimable: 62184 kB' 'SUnreclaim: 93284 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 
00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.190 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.190 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # continue 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.191 00:16:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.191 00:16:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.191 00:16:29 -- setup/common.sh@33 -- # echo 0 00:04:14.191 00:16:29 -- setup/common.sh@33 -- # return 0 00:04:14.191 00:16:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.191 00:16:29 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.191 00:16:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.191 00:16:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.191 node0=1024 expecting 1024 00:04:14.191 00:16:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:14.191 00:16:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:14.191 00:04:14.191 real 0m1.040s 00:04:14.191 user 0m0.539s 00:04:14.191 sys 0m0.565s 00:04:14.191 00:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.191 00:16:29 -- common/autotest_common.sh@10 -- # set +x 00:04:14.191 ************************************ 00:04:14.191 END TEST no_shrink_alloc 00:04:14.191 ************************************ 00:04:14.191 00:16:29 -- setup/hugepages.sh@217 -- # clear_hp 00:04:14.191 00:16:29 -- setup/hugepages.sh@37 -- # local node hp 00:04:14.191 00:16:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.191 00:16:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.191 00:16:29 -- setup/hugepages.sh@41 -- # echo 0 00:04:14.191 00:16:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.191 00:16:29 -- setup/hugepages.sh@41 -- # echo 0 00:04:14.191 00:16:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.191 00:16:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.191 ************************************ 00:04:14.191 END TEST hugepages 00:04:14.191 ************************************ 00:04:14.191 00:04:14.191 real 0m4.737s 00:04:14.191 user 0m2.272s 00:04:14.191 sys 0m2.516s 00:04:14.191 00:16:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.191 00:16:29 -- common/autotest_common.sh@10 -- # set +x 00:04:14.191 00:16:29 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.191 00:16:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.191 00:16:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.191 00:16:29 -- common/autotest_common.sh@10 -- # set +x 00:04:14.191 ************************************ 00:04:14.191 START TEST driver 00:04:14.191 ************************************ 00:04:14.191 00:16:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:14.191 * Looking for test storage... 
00:04:14.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.191 00:16:29 -- setup/driver.sh@68 -- # setup reset 00:04:14.191 00:16:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.191 00:16:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.762 00:16:30 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.762 00:16:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.762 00:16:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.762 00:16:30 -- common/autotest_common.sh@10 -- # set +x 00:04:14.762 ************************************ 00:04:14.762 START TEST guess_driver 00:04:14.762 ************************************ 00:04:14.762 00:16:30 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:14.762 00:16:30 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.762 00:16:30 -- setup/driver.sh@47 -- # local fail=0 00:04:14.762 00:16:30 -- setup/driver.sh@49 -- # pick_driver 00:04:14.762 00:16:30 -- setup/driver.sh@36 -- # vfio 00:04:14.762 00:16:30 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.762 00:16:30 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.762 00:16:30 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.762 00:16:30 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.762 00:16:30 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:14.762 00:16:30 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:14.762 00:16:30 -- setup/driver.sh@32 -- # return 1 00:04:14.762 00:16:30 -- setup/driver.sh@38 -- # uio 00:04:14.762 00:16:30 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:14.762 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:14.762 00:16:30 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:14.762 Looking for driver=uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:14.762 00:16:30 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.762 00:16:30 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:14.762 00:16:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.762 00:16:30 -- setup/driver.sh@45 -- # setup output config 00:04:14.762 00:16:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.762 00:16:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.699 00:16:31 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:15.699 00:16:31 -- setup/driver.sh@58 -- # continue 00:04:15.699 00:16:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.699 00:16:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.699 00:16:31 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.699 00:16:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.699 00:16:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.699 00:16:31 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.699 00:16:31 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.699 00:16:31 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.699 00:16:31 -- setup/driver.sh@65 -- # setup reset 00:04:15.699 00:16:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.699 00:16:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.267 00:04:16.267 real 0m1.419s 00:04:16.267 user 0m0.583s 00:04:16.267 sys 0m0.843s 00:04:16.267 00:16:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.267 00:16:31 -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 END TEST guess_driver 00:04:16.267 ************************************ 00:04:16.267 00:04:16.267 real 0m2.110s 00:04:16.267 user 0m0.823s 00:04:16.267 sys 0m1.354s 00:04:16.267 00:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.267 00:16:32 -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 END TEST driver 00:04:16.267 ************************************ 00:04:16.267 00:16:32 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:16.267 00:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.267 00:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.267 00:16:32 -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 START TEST devices 00:04:16.267 ************************************ 00:04:16.267 00:16:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:16.527 * Looking for test storage... 00:04:16.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:16.527 00:16:32 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:16.527 00:16:32 -- setup/devices.sh@192 -- # setup reset 00:04:16.527 00:16:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.527 00:16:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.095 00:16:32 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:17.095 00:16:32 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:17.095 00:16:32 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:17.095 00:16:32 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:17.095 00:16:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.095 00:16:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:17.095 00:16:32 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:17.095 00:16:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.095 00:16:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:04:17.095 00:16:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:04:17.095 00:16:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.095 00:16:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:04:17.095 00:16:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:04:17.095 00:16:32 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:17.095 00:16:32 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:04:17.095 00:16:32 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:04:17.095 00:16:32 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:17.095 00:16:32 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:17.095 00:16:32 -- setup/devices.sh@196 -- # blocks=() 00:04:17.095 00:16:32 -- setup/devices.sh@196 -- # declare -a blocks 00:04:17.095 00:16:32 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:17.095 00:16:32 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:17.095 00:16:32 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:17.095 00:16:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.095 00:16:32 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:17.095 00:16:32 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.095 00:16:32 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:17.096 00:16:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:17.096 00:16:32 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:17.096 00:16:32 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:17.096 00:16:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:17.096 No valid GPT data, bailing 00:04:17.355 00:16:32 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.355 00:16:32 -- scripts/common.sh@393 -- # pt= 00:04:17.355 00:16:32 -- scripts/common.sh@394 -- # return 1 00:04:17.355 00:16:32 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:17.355 00:16:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:17.355 00:16:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:17.355 00:16:32 -- setup/common.sh@80 -- # echo 5368709120 00:04:17.355 00:16:32 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:17.355 00:16:32 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.355 00:16:32 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:17.355 00:16:32 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.355 00:16:32 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:17.355 00:16:32 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:17.355 00:16:32 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:17.355 00:16:32 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:17.355 00:16:32 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:17.355 00:16:32 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:17.355 00:16:32 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:17.355 No valid GPT data, bailing 00:04:17.355 00:16:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:17.355 00:16:33 -- scripts/common.sh@393 -- # pt= 00:04:17.355 00:16:33 -- scripts/common.sh@394 -- # return 1 00:04:17.355 00:16:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:17.355 00:16:33 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:17.355 00:16:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:17.355 00:16:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:17.355 00:16:33 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.355 00:16:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.355 00:16:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:17.355 00:16:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.355 00:16:33 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:17.355 00:16:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:17.355 00:16:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:17.355 00:16:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:17.355 00:16:33 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:17.355 00:16:33 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:17.355 00:16:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:17.355 No valid GPT data, bailing 00:04:17.355 00:16:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:17.355 00:16:33 -- scripts/common.sh@393 -- # pt= 00:04:17.355 00:16:33 -- scripts/common.sh@394 -- # return 1 00:04:17.355 00:16:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:17.355 00:16:33 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:17.355 00:16:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:17.355 00:16:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:17.355 00:16:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.355 00:16:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.355 00:16:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:17.355 00:16:33 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.355 00:16:33 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:17.355 00:16:33 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:17.355 00:16:33 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:17.355 00:16:33 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:17.355 00:16:33 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:17.355 00:16:33 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:17.355 00:16:33 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:17.355 No valid GPT data, bailing 00:04:17.355 00:16:33 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:17.616 00:16:33 -- scripts/common.sh@393 -- # pt= 00:04:17.616 00:16:33 -- scripts/common.sh@394 -- # return 1 00:04:17.616 00:16:33 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:17.616 00:16:33 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:17.616 00:16:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:17.616 00:16:33 -- setup/common.sh@80 -- # echo 4294967296 00:04:17.616 00:16:33 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:17.616 00:16:33 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.616 00:16:33 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:17.616 00:16:33 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:17.616 00:16:33 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:17.616 00:16:33 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:17.616 00:16:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.616 00:16:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.616 00:16:33 -- common/autotest_common.sh@10 -- # set +x 00:04:17.616 
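For readers following the trace, the block above is setup/devices.sh qualifying candidate test disks: every non-zoned NVMe namespace with no partition table in use ("No valid GPT data, bailing" is the wanted outcome) and at least min_disk_size bytes is kept, together with the PCI function that backs it. Below is a minimal sketch of that pass; the sysfs paths, the blkid call standing in for spdk-gpt.py, and the readlink-based PCI lookup are illustrative assumptions, not the exact SPDK helpers.

```bash
#!/usr/bin/env bash
# Sketch of the device-qualification pass traced above (assumed helpers, not the
# real setup/devices.sh code): keep every non-zoned NVMe namespace that carries
# no partition table and is at least min_disk_size bytes.
shopt -s extglob

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme!(*c*); do       # skip controller character devices
    name=${block##*/}
    # Zoned namespaces cannot be used as plain test disks.
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi
    # An existing partition table means the disk is already in use.
    [[ -z $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]] || continue
    # /sys reports size in 512-byte sectors.
    size=$(( $(cat "$block/size") * 512 ))
    (( size >= min_disk_size )) || continue
    # Resolve the PCI function backing the namespace, e.g. 0000:00:06.0.
    pci=$(basename "$(readlink -f "$block/device/device")")
    blocks+=("$name")
    blocks_to_pci["$name"]=$pci
done

printf 'qualified disk: %s\n' "${blocks[@]}"
```

In this run four namespaces qualify (nvme0n1, nvme1n1, nvme1n2, nvme1n3) and nvme0n1 becomes the test disk for the mount tests that follow.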
************************************ 00:04:17.616 START TEST nvme_mount 00:04:17.616 ************************************ 00:04:17.616 00:16:33 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:17.616 00:16:33 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:17.616 00:16:33 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:17.616 00:16:33 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:17.616 00:16:33 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:17.616 00:16:33 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:17.616 00:16:33 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.616 00:16:33 -- setup/common.sh@40 -- # local part_no=1 00:04:17.616 00:16:33 -- setup/common.sh@41 -- # local size=1073741824 00:04:17.616 00:16:33 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.616 00:16:33 -- setup/common.sh@44 -- # parts=() 00:04:17.616 00:16:33 -- setup/common.sh@44 -- # local parts 00:04:17.616 00:16:33 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.616 00:16:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.616 00:16:33 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.616 00:16:33 -- setup/common.sh@46 -- # (( part++ )) 00:04:17.616 00:16:33 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.616 00:16:33 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:17.616 00:16:33 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.616 00:16:33 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:18.571 Creating new GPT entries in memory. 00:04:18.571 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.571 other utilities. 00:04:18.571 00:16:34 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.571 00:16:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.571 00:16:34 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.571 00:16:34 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.571 00:16:34 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:19.508 Creating new GPT entries in memory. 00:04:19.508 The operation has completed successfully. 
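The sgdisk sequence just logged (zap-all, then --new=1:2048:264191 under flock, with sync_dev_uevents.sh waiting for the partition uevent) follows a common pattern: never format a partition node until the kernel has actually published it. The sketch below shows the same flow in simplified form; the polling loop is a stand-in for scripts/sync_dev_uevents.sh, and the device names and sector range are simply the ones from this run.

```bash
#!/usr/bin/env bash
set -euo pipefail
# Simplified sketch of the partitioning step traced above (not the real
# setup/common.sh code): wipe the GPT, create one partition under an exclusive
# lock, wait for the new node, then format and mount it.

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # same sector range as the trace

# mkfs on a device node udev has not yet created is the race this guards against;
# the real test waits for the partition uevent instead of polling.
for _ in {1..50}; do
    [[ -b $part ]] && break
    sleep 0.1
done
[[ -b $part ]] || { echo "timed out waiting for $part" >&2; exit 1; }

mkfs.ext4 -qF "$part"
mkdir -p "$mnt"
mount "$part" "$mnt"
```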
00:04:19.508 00:16:35 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.508 00:16:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.508 00:16:35 -- setup/common.sh@62 -- # wait 52133 00:04:19.508 00:16:35 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.508 00:16:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:19.508 00:16:35 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.508 00:16:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:19.508 00:16:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:19.508 00:16:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.767 00:16:35 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.767 00:16:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.767 00:16:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:19.767 00:16:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:19.767 00:16:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:19.767 00:16:35 -- setup/devices.sh@53 -- # local found=0 00:04:19.767 00:16:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.767 00:16:35 -- setup/devices.sh@56 -- # : 00:04:19.767 00:16:35 -- setup/devices.sh@59 -- # local pci status 00:04:19.767 00:16:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.767 00:16:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.767 00:16:35 -- setup/devices.sh@47 -- # setup output config 00:04:19.767 00:16:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.767 00:16:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.767 00:16:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.767 00:16:35 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.767 00:16:35 -- setup/devices.sh@63 -- # found=1 00:04:19.767 00:16:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.767 00:16:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.767 00:16:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.024 00:16:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.025 00:16:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.284 00:16:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.284 00:16:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.284 00:16:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.284 00:16:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:20.284 00:16:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.284 00:16:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.284 00:16:35 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.284 00:16:35 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:20.284 00:16:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.284 00:16:35 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.284 00:16:36 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.284 00:16:36 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.284 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.284 00:16:36 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.284 00:16:36 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.543 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.543 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.543 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.543 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.543 00:16:36 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:20.543 00:16:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:20.543 00:16:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.543 00:16:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.543 00:16:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.543 00:16:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.543 00:16:36 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.543 00:16:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:20.543 00:16:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.543 00:16:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.543 00:16:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:20.543 00:16:36 -- setup/devices.sh@53 -- # local found=0 00:04:20.543 00:16:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.543 00:16:36 -- setup/devices.sh@56 -- # : 00:04:20.543 00:16:36 -- setup/devices.sh@59 -- # local pci status 00:04:20.543 00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.543 00:16:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:20.543 00:16:36 -- setup/devices.sh@47 -- # setup output config 00:04:20.543 00:16:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.543 00:16:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.801 00:16:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.801 00:16:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:20.801 00:16:36 -- setup/devices.sh@63 -- # found=1 00:04:20.801 00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.801 00:16:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.801 
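The verify step being traced here re-runs scripts/setup.sh config with PCI_ALLOWED narrowed to the disk under test and scans its per-device report for the "Active devices: ... so not binding PCI dev" marker, then confirms the mount and its marker file survived. The sketch below shows the shape of that check; the verify() wrapper and the exact report layout are assumptions made for illustration, only setup.sh itself is the real SPDK script.

```bash
#!/usr/bin/env bash
# Sketch of the verify pattern traced above (assumed report format): with
# PCI_ALLOWED limited to the test disk, setup.sh config must report the
# controller as having active devices and therefore refuse to rebind it.

verify() {
    local dev=$1 mounts=$2 mount_point=$3 test_file=$4
    local pci status found=0

    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        # devices.sh@62 in the trace matches the same marker text.
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config)

    (( found == 1 )) || return 1
    # The mount and the marker file written before setup.sh ran must both survive.
    if [[ -n $mount_point ]]; then
        mountpoint -q "$mount_point" || return 1
        [[ -z $test_file || -e $test_file ]] || return 1
    fi
}

verify 0000:00:06.0 nvme0n1:nvme0n1 \
    /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount \
    /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme
```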
00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.060 00:16:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.060 00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.060 00:16:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.060 00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.318 00:16:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.318 00:16:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:21.318 00:16:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.318 00:16:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.318 00:16:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.318 00:16:36 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.318 00:16:36 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:21.318 00:16:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:21.318 00:16:36 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:21.318 00:16:36 -- setup/devices.sh@50 -- # local mount_point= 00:04:21.318 00:16:36 -- setup/devices.sh@51 -- # local test_file= 00:04:21.318 00:16:36 -- setup/devices.sh@53 -- # local found=0 00:04:21.318 00:16:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:21.318 00:16:36 -- setup/devices.sh@59 -- # local pci status 00:04:21.318 00:16:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.318 00:16:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:21.318 00:16:36 -- setup/devices.sh@47 -- # setup output config 00:04:21.318 00:16:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.318 00:16:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.577 00:16:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.577 00:16:37 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:21.577 00:16:37 -- setup/devices.sh@63 -- # found=1 00:04:21.577 00:16:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.577 00:16:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.577 00:16:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.836 00:16:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.836 00:16:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.836 00:16:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:21.836 00:16:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.836 00:16:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.836 00:16:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.836 00:16:37 -- setup/devices.sh@68 -- # return 0 00:04:21.836 00:16:37 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:21.836 00:16:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.836 00:16:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.836 00:16:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.836 00:16:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:21.836 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:21.836 00:04:21.836 real 0m4.449s 00:04:21.836 user 0m0.988s 00:04:21.836 sys 0m1.152s 00:04:21.836 00:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.836 00:16:37 -- common/autotest_common.sh@10 -- # set +x 00:04:21.836 ************************************ 00:04:21.836 END TEST nvme_mount 00:04:21.836 ************************************ 00:04:22.095 00:16:37 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:22.095 00:16:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:22.095 00:16:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:22.095 00:16:37 -- common/autotest_common.sh@10 -- # set +x 00:04:22.095 ************************************ 00:04:22.095 START TEST dm_mount 00:04:22.095 ************************************ 00:04:22.095 00:16:37 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:22.095 00:16:37 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:22.095 00:16:37 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:22.095 00:16:37 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:22.095 00:16:37 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:22.095 00:16:37 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:22.095 00:16:37 -- setup/common.sh@40 -- # local part_no=2 00:04:22.095 00:16:37 -- setup/common.sh@41 -- # local size=1073741824 00:04:22.095 00:16:37 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:22.095 00:16:37 -- setup/common.sh@44 -- # parts=() 00:04:22.095 00:16:37 -- setup/common.sh@44 -- # local parts 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.095 00:16:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part++ )) 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.095 00:16:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part++ )) 00:04:22.095 00:16:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:22.095 00:16:37 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:22.095 00:16:37 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:22.095 00:16:37 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:23.031 Creating new GPT entries in memory. 00:04:23.031 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:23.031 other utilities. 00:04:23.031 00:16:38 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:23.031 00:16:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.031 00:16:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:23.031 00:16:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.031 00:16:38 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:23.966 Creating new GPT entries in memory. 00:04:23.966 The operation has completed successfully. 00:04:23.966 00:16:39 -- setup/common.sh@57 -- # (( part++ )) 00:04:23.966 00:16:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:23.966 00:16:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:23.966 00:16:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:23.966 00:16:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:25.343 The operation has completed successfully. 00:04:25.343 00:16:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.343 00:16:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.343 00:16:40 -- setup/common.sh@62 -- # wait 52588 00:04:25.343 00:16:40 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:25.343 00:16:40 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.343 00:16:40 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.343 00:16:40 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:25.343 00:16:40 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:25.343 00:16:40 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.343 00:16:40 -- setup/devices.sh@161 -- # break 00:04:25.343 00:16:40 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.343 00:16:40 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:25.343 00:16:40 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:25.343 00:16:40 -- setup/devices.sh@166 -- # dm=dm-0 00:04:25.343 00:16:40 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:25.343 00:16:40 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:25.343 00:16:40 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.343 00:16:40 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:25.343 00:16:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.343 00:16:40 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:25.343 00:16:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:25.343 00:16:40 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.343 00:16:40 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.343 00:16:40 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:25.343 00:16:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:25.343 00:16:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.343 00:16:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.343 00:16:40 -- setup/devices.sh@53 -- # local found=0 00:04:25.343 00:16:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.343 00:16:40 -- setup/devices.sh@56 -- # : 00:04:25.343 00:16:40 -- setup/devices.sh@59 -- # local pci status 00:04:25.343 00:16:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.343 00:16:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:25.343 00:16:40 -- setup/devices.sh@47 -- # setup output config 00:04:25.343 00:16:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.343 00:16:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:25.343 00:16:41 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.343 00:16:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:25.343 00:16:41 -- setup/devices.sh@63 -- # found=1 00:04:25.343 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.343 00:16:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.343 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.601 00:16:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.601 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.860 00:16:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:25.860 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.860 00:16:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.860 00:16:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:25.860 00:16:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.860 00:16:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:25.860 00:16:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:25.860 00:16:41 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:25.860 00:16:41 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:25.860 00:16:41 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:25.860 00:16:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:25.860 00:16:41 -- setup/devices.sh@50 -- # local mount_point= 00:04:25.860 00:16:41 -- setup/devices.sh@51 -- # local test_file= 00:04:25.860 00:16:41 -- setup/devices.sh@53 -- # local found=0 00:04:25.860 00:16:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.860 00:16:41 -- setup/devices.sh@59 -- # local pci status 00:04:25.860 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.860 00:16:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:25.860 00:16:41 -- setup/devices.sh@47 -- # setup output config 00:04:25.860 00:16:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.860 00:16:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.119 00:16:41 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.119 00:16:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:26.119 00:16:41 -- setup/devices.sh@63 -- # found=1 00:04:26.119 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.119 00:16:41 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.119 00:16:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.378 00:16:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.378 00:16:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.378 00:16:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:26.378 00:16:42 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.378 00:16:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.378 00:16:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:26.378 00:16:42 -- setup/devices.sh@68 -- # return 0 00:04:26.378 00:16:42 -- setup/devices.sh@187 -- # cleanup_dm 00:04:26.378 00:16:42 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.378 00:16:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.378 00:16:42 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:26.378 00:16:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.378 00:16:42 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:26.637 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:26.637 00:16:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.637 00:16:42 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:26.637 00:04:26.637 real 0m4.527s 00:04:26.637 user 0m0.657s 00:04:26.637 sys 0m0.793s 00:04:26.637 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.637 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.637 ************************************ 00:04:26.637 END TEST dm_mount 00:04:26.637 ************************************ 00:04:26.637 00:16:42 -- setup/devices.sh@1 -- # cleanup 00:04:26.637 00:16:42 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:26.637 00:16:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.637 00:16:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.637 00:16:42 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:26.637 00:16:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.637 00:16:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:26.895 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.895 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:26.895 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:26.895 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:26.895 00:16:42 -- setup/devices.sh@12 -- # cleanup_dm 00:04:26.895 00:16:42 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:26.895 00:16:42 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:26.895 00:16:42 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:26.895 00:16:42 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:26.895 00:16:42 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:26.895 00:16:42 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:26.895 00:04:26.895 real 0m10.522s 00:04:26.895 user 0m2.320s 00:04:26.895 sys 0m2.531s 00:04:26.895 ************************************ 00:04:26.895 END TEST devices 00:04:26.895 ************************************ 00:04:26.895 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.895 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.895 00:04:26.896 real 0m22.040s 00:04:26.896 user 0m7.411s 00:04:26.896 sys 0m9.047s 00:04:26.896 00:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.896 00:16:42 -- common/autotest_common.sh@10 -- # set +x 00:04:26.896 ************************************ 00:04:26.896 END TEST setup.sh 00:04:26.896 ************************************ 00:04:26.896 00:16:42 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:27.154 Hugepages 00:04:27.154 node hugesize free / total 00:04:27.154 node0 1048576kB 0 / 0 00:04:27.154 node0 2048kB 2048 / 2048 00:04:27.154 00:04:27.154 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.154 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:27.154 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:27.412 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:27.412 00:16:43 -- spdk/autotest.sh@141 -- # uname -s 00:04:27.412 00:16:43 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:27.412 00:16:43 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:27.412 00:16:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.979 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.979 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.238 00:16:43 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:29.174 00:16:44 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:29.174 00:16:44 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:29.174 00:16:44 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.174 00:16:44 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:29.174 00:16:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.174 00:16:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.174 00:16:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.174 00:16:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:29.174 00:16:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.174 00:16:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:29.174 00:16:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:29.174 00:16:44 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.433 Waiting for block devices as requested 00:04:29.692 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.692 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.692 00:16:45 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:29.692 00:16:45 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:29.692 00:16:45 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:29.692 00:16:45 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:29.692 00:16:45 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:29.692 00:16:45 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1542 -- # continue 00:04:29.692 00:16:45 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:29.692 00:16:45 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:04:29.692 00:16:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:29.692 00:16:45 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:29.692 00:16:45 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:29.692 00:16:45 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:29.692 00:16:45 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:29.692 00:16:45 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:29.692 00:16:45 -- common/autotest_common.sh@1542 -- # continue 00:04:29.692 00:16:45 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:29.692 00:16:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:29.692 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.951 00:16:45 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:29.951 00:16:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:29.951 00:16:45 -- common/autotest_common.sh@10 -- # set +x 00:04:29.951 00:16:45 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.519 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.519 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:30.778 00:16:46 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:30.778 00:16:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:30.778 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.778 00:16:46 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:30.778 00:16:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:30.778 00:16:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:30.778 00:16:46 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:30.778 00:16:46 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:30.778 00:16:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:30.778 00:16:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:30.778 00:16:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:30.778 00:16:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.778 00:16:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.778 00:16:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:30.778 00:16:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:30.778 00:16:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:30.778 00:16:46 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.778 00:16:46 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:30.778 00:16:46 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:30.778 00:16:46 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.778 00:16:46 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:30.778 00:16:46 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:30.778 00:16:46 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:30.778 00:16:46 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:30.778 00:16:46 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:30.778 00:16:46 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:30.778 00:16:46 -- common/autotest_common.sh@1578 -- # return 0 00:04:30.778 00:16:46 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:30.778 00:16:46 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:30.778 00:16:46 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:30.778 00:16:46 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:30.778 00:16:46 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:30.778 00:16:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:30.778 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.778 00:16:46 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.778 00:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.778 00:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.778 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.778 ************************************ 00:04:30.778 START TEST env 00:04:30.778 ************************************ 00:04:30.778 00:16:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:30.778 * Looking for test storage... 
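For reference, the opal_revert_cleanup step traced above reduces to a short shell sketch: enumerate the NVMe controllers the same way get_nvme_bdfs does, then keep only those whose PCI device ID is 0x0a54. Paths match this run (repo at /home/vagrant/spdk_repo/spdk); on this VM the QEMU-emulated controllers report device ID 0x0010, so the list comes back empty and the cleanup is a no-op, exactly as the trace shows.

# enumerate NVMe PCI addresses via gen_nvme.sh, as get_nvme_bdfs does
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

# keep only controllers whose PCI device ID matches the one the opal cleanup targets
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && echo "$bdf"
done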
00:04:30.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:30.778 00:16:46 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:30.778 00:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.778 00:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.778 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:30.778 ************************************ 00:04:30.778 START TEST env_memory 00:04:30.778 ************************************ 00:04:30.778 00:16:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.037 00:04:31.037 00:04:31.037 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.037 http://cunit.sourceforge.net/ 00:04:31.037 00:04:31.037 00:04:31.037 Suite: memory 00:04:31.037 Test: alloc and free memory map ...[2024-09-29 00:16:46.664432] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.037 passed 00:04:31.037 Test: mem map translation ...[2024-09-29 00:16:46.695044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.037 [2024-09-29 00:16:46.695086] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.037 [2024-09-29 00:16:46.695151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.037 [2024-09-29 00:16:46.695162] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.037 passed 00:04:31.037 Test: mem map registration ...[2024-09-29 00:16:46.758759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:31.037 [2024-09-29 00:16:46.758788] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:31.037 passed 00:04:31.037 Test: mem map adjacent registrations ...passed 00:04:31.037 00:04:31.037 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.037 suites 1 1 n/a 0 0 00:04:31.037 tests 4 4 4 0 0 00:04:31.037 asserts 152 152 152 0 n/a 00:04:31.037 00:04:31.037 Elapsed time = 0.214 seconds 00:04:31.037 00:04:31.037 real 0m0.229s 00:04:31.037 user 0m0.216s 00:04:31.037 sys 0m0.010s 00:04:31.037 00:16:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.037 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.037 ************************************ 00:04:31.037 END TEST env_memory 00:04:31.037 ************************************ 00:04:31.297 00:16:46 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.297 00:16:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.297 00:16:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.297 00:16:46 -- common/autotest_common.sh@10 -- # set +x 00:04:31.297 ************************************ 00:04:31.297 START TEST env_vtophys 00:04:31.297 ************************************ 00:04:31.297 00:16:46 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.297 EAL: lib.eal log level changed from notice to debug 00:04:31.297 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 1 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 2 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 3 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 4 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 5 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 6 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 7 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 8 as core 0 on socket 0 00:04:31.297 EAL: Detected lcore 9 as core 0 on socket 0 00:04:31.297 EAL: Maximum logical cores by configuration: 128 00:04:31.297 EAL: Detected CPU lcores: 10 00:04:31.297 EAL: Detected NUMA nodes: 1 00:04:31.297 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:31.297 EAL: Detected shared linkage of DPDK 00:04:31.297 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.297 EAL: Selected IOVA mode 'PA' 00:04:31.297 EAL: Probing VFIO support... 00:04:31.297 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.297 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:31.297 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.297 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.297 EAL: Setting up physically contiguous memory... 00:04:31.297 EAL: Setting maximum number of open files to 524288 00:04:31.297 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.297 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.297 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.297 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.297 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.297 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.297 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.297 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.297 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.297 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.297 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.297 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.297 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.297 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.297 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.297 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.297 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.297 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.297 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.297 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.297 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.297 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.297 EAL: Hugepages will be freed exactly as allocated. 
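The EAL banner above is fully determined by the VM's state: no vfio/vfio_pci kernel modules, so PCI access falls back to uio_pci_generic, and a pre-allocated pool of 2048 x 2 MB hugepages (see the setup.sh status table earlier in this log) backs the memseg lists being reserved here. A quick sketch of the standard sysfs/procfs checks that predict this output; nothing here is SPDK-specific:

# hugepage pool the EAL will draw from (2048 x 2048 kB on this VM)
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo

# EAL probes these exact paths; "error 2" in the log means the module is absent,
# so VFIO support is skipped and devices stay on uio_pci_generic
test -d /sys/module/vfio     || echo 'vfio module not loaded'
test -d /sys/module/vfio_pci || echo 'vfio_pci module not loaded'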
00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: TSC frequency is ~2200000 KHz 00:04:31.297 EAL: Main lcore 0 is ready (tid=7eff596e0a00;cpuset=[0]) 00:04:31.297 EAL: Trying to obtain current memory policy. 00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 0 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.297 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.297 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:31.297 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.297 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:31.297 00:04:31.297 00:04:31.297 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.297 http://cunit.sourceforge.net/ 00:04:31.297 00:04:31.297 00:04:31.297 Suite: components_suite 00:04:31.297 Test: vtophys_malloc_test ...passed 00:04:31.297 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 4 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 4MB 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was shrunk by 4MB 00:04:31.297 EAL: Trying to obtain current memory policy. 00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 4 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 6MB 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was shrunk by 6MB 00:04:31.297 EAL: Trying to obtain current memory policy. 00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 4 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 10MB 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was shrunk by 10MB 00:04:31.297 EAL: Trying to obtain current memory policy. 
00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 4 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 18MB 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was shrunk by 18MB 00:04:31.297 EAL: Trying to obtain current memory policy. 00:04:31.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.297 EAL: Restoring previous memory policy: 4 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.297 EAL: request: mp_malloc_sync 00:04:31.297 EAL: No shared files mode enabled, IPC is disabled 00:04:31.297 EAL: Heap on socket 0 was expanded by 34MB 00:04:31.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.298 EAL: request: mp_malloc_sync 00:04:31.298 EAL: No shared files mode enabled, IPC is disabled 00:04:31.298 EAL: Heap on socket 0 was shrunk by 34MB 00:04:31.298 EAL: Trying to obtain current memory policy. 00:04:31.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.298 EAL: Restoring previous memory policy: 4 00:04:31.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.298 EAL: request: mp_malloc_sync 00:04:31.298 EAL: No shared files mode enabled, IPC is disabled 00:04:31.298 EAL: Heap on socket 0 was expanded by 66MB 00:04:31.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.298 EAL: request: mp_malloc_sync 00:04:31.298 EAL: No shared files mode enabled, IPC is disabled 00:04:31.298 EAL: Heap on socket 0 was shrunk by 66MB 00:04:31.298 EAL: Trying to obtain current memory policy. 00:04:31.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.298 EAL: Restoring previous memory policy: 4 00:04:31.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.298 EAL: request: mp_malloc_sync 00:04:31.298 EAL: No shared files mode enabled, IPC is disabled 00:04:31.298 EAL: Heap on socket 0 was expanded by 130MB 00:04:31.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.557 EAL: request: mp_malloc_sync 00:04:31.557 EAL: No shared files mode enabled, IPC is disabled 00:04:31.557 EAL: Heap on socket 0 was shrunk by 130MB 00:04:31.557 EAL: Trying to obtain current memory policy. 00:04:31.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.557 EAL: Restoring previous memory policy: 4 00:04:31.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.557 EAL: request: mp_malloc_sync 00:04:31.557 EAL: No shared files mode enabled, IPC is disabled 00:04:31.557 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.557 EAL: request: mp_malloc_sync 00:04:31.557 EAL: No shared files mode enabled, IPC is disabled 00:04:31.557 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.557 EAL: Trying to obtain current memory policy. 
00:04:31.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.557 EAL: Restoring previous memory policy: 4 00:04:31.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.557 EAL: request: mp_malloc_sync 00:04:31.557 EAL: No shared files mode enabled, IPC is disabled 00:04:31.557 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.831 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.831 EAL: request: mp_malloc_sync 00:04:31.831 EAL: No shared files mode enabled, IPC is disabled 00:04:31.831 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.831 EAL: Trying to obtain current memory policy. 00:04:31.831 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.831 EAL: Restoring previous memory policy: 4 00:04:31.831 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.831 EAL: request: mp_malloc_sync 00:04:31.832 EAL: No shared files mode enabled, IPC is disabled 00:04:31.832 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.101 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.101 passed 00:04:32.101 00:04:32.101 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.101 suites 1 1 n/a 0 0 00:04:32.101 tests 2 2 2 0 0 00:04:32.101 asserts 5316 5316 5316 0 n/a 00:04:32.101 00:04:32.101 Elapsed time = 0.740 seconds 00:04:32.101 EAL: request: mp_malloc_sync 00:04:32.101 EAL: No shared files mode enabled, IPC is disabled 00:04:32.101 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:32.102 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.102 EAL: request: mp_malloc_sync 00:04:32.102 EAL: No shared files mode enabled, IPC is disabled 00:04:32.102 EAL: Heap on socket 0 was shrunk by 2MB 00:04:32.102 EAL: No shared files mode enabled, IPC is disabled 00:04:32.102 EAL: No shared files mode enabled, IPC is disabled 00:04:32.102 EAL: No shared files mode enabled, IPC is disabled 00:04:32.102 00:04:32.102 real 0m0.934s 00:04:32.102 user 0m0.480s 00:04:32.102 sys 0m0.323s 00:04:32.102 00:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.102 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 ************************************ 00:04:32.102 END TEST env_vtophys 00:04:32.102 ************************************ 00:04:32.102 00:16:47 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.102 00:16:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.102 00:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.102 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 ************************************ 00:04:32.102 START TEST env_pci 00:04:32.102 ************************************ 00:04:32.102 00:16:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:32.102 00:04:32.102 00:04:32.102 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.102 http://cunit.sourceforge.net/ 00:04:32.102 00:04:32.102 00:04:32.102 Suite: pci 00:04:32.102 Test: pci_hook ...[2024-09-29 00:16:47.898029] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53713 has claimed it 00:04:32.102 passed 00:04:32.102 00:04:32.102 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.102 suites 1 1 n/a 0 0 00:04:32.102 tests 1 1 1 0 0 00:04:32.102 asserts 25 25 25 0 n/a 00:04:32.102 00:04:32.102 Elapsed time = 0.002 seconds 00:04:32.102 EAL: Cannot find device (10000:00:01.0) 00:04:32.102 EAL: Failed to attach device 
on primary process 00:04:32.102 00:04:32.102 real 0m0.022s 00:04:32.102 user 0m0.011s 00:04:32.102 sys 0m0.010s 00:04:32.102 00:16:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.102 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 ************************************ 00:04:32.102 END TEST env_pci 00:04:32.102 ************************************ 00:04:32.102 00:16:47 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:32.102 00:16:47 -- env/env.sh@15 -- # uname 00:04:32.362 00:16:47 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:32.362 00:16:47 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:32.362 00:16:47 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.362 00:16:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:32.362 00:16:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.362 00:16:47 -- common/autotest_common.sh@10 -- # set +x 00:04:32.362 ************************************ 00:04:32.362 START TEST env_dpdk_post_init 00:04:32.362 ************************************ 00:04:32.362 00:16:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:32.362 EAL: Detected CPU lcores: 10 00:04:32.362 EAL: Detected NUMA nodes: 1 00:04:32.362 EAL: Detected shared linkage of DPDK 00:04:32.362 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.362 EAL: Selected IOVA mode 'PA' 00:04:32.362 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.362 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:32.362 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:32.362 Starting DPDK initialization... 00:04:32.362 Starting SPDK post initialization... 00:04:32.362 SPDK NVMe probe 00:04:32.362 Attaching to 0000:00:06.0 00:04:32.362 Attaching to 0000:00:07.0 00:04:32.362 Attached to 0000:00:06.0 00:04:32.362 Attached to 0000:00:07.0 00:04:32.362 Cleaning up... 
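The env_dpdk_post_init run above can be repeated by hand with the same arguments the harness used. A sketch under the assumptions of this job (repo at /home/vagrant/spdk_repo/spdk, run as root, controllers at 0000:00:06.0 and 0000:00:07.0 bound to uio_pci_generic by setup.sh because VFIO is unavailable):

rootdir=/home/vagrant/spdk_repo/spdk

# rebind the NVMe controllers to a userspace driver first (nvme -> uio_pci_generic in this log)
"$rootdir/scripts/setup.sh"

# same arguments as the traced invocation: one core, fixed base virtual address
"$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000

# confirm which driver each controller ended up on
for bdf in 0000:00:06.0 0000:00:07.0; do
    printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
done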
00:04:32.362 00:04:32.362 real 0m0.169s 00:04:32.362 user 0m0.037s 00:04:32.362 sys 0m0.034s 00:04:32.362 00:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.362 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.362 ************************************ 00:04:32.362 END TEST env_dpdk_post_init 00:04:32.362 ************************************ 00:04:32.362 00:16:48 -- env/env.sh@26 -- # uname 00:04:32.362 00:16:48 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.362 00:16:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.362 00:16:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.362 00:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.362 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.362 ************************************ 00:04:32.362 START TEST env_mem_callbacks 00:04:32.362 ************************************ 00:04:32.362 00:16:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.362 EAL: Detected CPU lcores: 10 00:04:32.362 EAL: Detected NUMA nodes: 1 00:04:32.362 EAL: Detected shared linkage of DPDK 00:04:32.621 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.621 EAL: Selected IOVA mode 'PA' 00:04:32.621 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.621 00:04:32.621 00:04:32.621 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.621 http://cunit.sourceforge.net/ 00:04:32.621 00:04:32.621 00:04:32.621 Suite: memory 00:04:32.621 Test: test ... 00:04:32.621 register 0x200000200000 2097152 00:04:32.621 malloc 3145728 00:04:32.621 register 0x200000400000 4194304 00:04:32.621 buf 0x200000500000 len 3145728 PASSED 00:04:32.621 malloc 64 00:04:32.621 buf 0x2000004fff40 len 64 PASSED 00:04:32.621 malloc 4194304 00:04:32.621 register 0x200000800000 6291456 00:04:32.621 buf 0x200000a00000 len 4194304 PASSED 00:04:32.621 free 0x200000500000 3145728 00:04:32.621 free 0x2000004fff40 64 00:04:32.621 unregister 0x200000400000 4194304 PASSED 00:04:32.621 free 0x200000a00000 4194304 00:04:32.621 unregister 0x200000800000 6291456 PASSED 00:04:32.621 malloc 8388608 00:04:32.621 register 0x200000400000 10485760 00:04:32.621 buf 0x200000600000 len 8388608 PASSED 00:04:32.621 free 0x200000600000 8388608 00:04:32.621 unregister 0x200000400000 10485760 PASSED 00:04:32.621 passed 00:04:32.621 00:04:32.621 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.621 suites 1 1 n/a 0 0 00:04:32.621 tests 1 1 1 0 0 00:04:32.621 asserts 15 15 15 0 n/a 00:04:32.621 00:04:32.621 Elapsed time = 0.007 seconds 00:04:32.621 00:04:32.621 real 0m0.141s 00:04:32.621 user 0m0.021s 00:04:32.621 sys 0m0.019s 00:04:32.621 00:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.621 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.621 ************************************ 00:04:32.621 END TEST env_mem_callbacks 00:04:32.621 ************************************ 00:04:32.621 00:04:32.621 real 0m1.831s 00:04:32.621 user 0m0.889s 00:04:32.621 sys 0m0.595s 00:04:32.621 00:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.621 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.621 ************************************ 00:04:32.621 END TEST env 00:04:32.621 ************************************ 00:04:32.621 00:16:48 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
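Every block in this log, including the rpc suite that starts next, is wrapped by the run_test helper from autotest_common.sh; the asterisk banners and the real/user/sys lines come from it. A rough sketch of the shape of that wrapper, inferred only from the output here (the real helper also does xtrace and timing bookkeeping that this sketch omits):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # produces the real/user/sys lines seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

# e.g. the invocation traced just above:
# run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh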
00:04:32.621 00:16:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.621 00:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.621 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.621 ************************************ 00:04:32.621 START TEST rpc 00:04:32.621 ************************************ 00:04:32.621 00:16:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.881 * Looking for test storage... 00:04:32.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.881 00:16:48 -- rpc/rpc.sh@65 -- # spdk_pid=53821 00:04:32.881 00:16:48 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.881 00:16:48 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.881 00:16:48 -- rpc/rpc.sh@67 -- # waitforlisten 53821 00:04:32.881 00:16:48 -- common/autotest_common.sh@819 -- # '[' -z 53821 ']' 00:04:32.881 00:16:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.881 00:16:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:32.881 00:16:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.881 00:16:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:32.881 00:16:48 -- common/autotest_common.sh@10 -- # set +x 00:04:32.881 [2024-09-29 00:16:48.576790] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:32.881 [2024-09-29 00:16:48.576949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53821 ] 00:04:32.881 [2024-09-29 00:16:48.721317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.141 [2024-09-29 00:16:48.775912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.141 [2024-09-29 00:16:48.776065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.141 [2024-09-29 00:16:48.776103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53821' to capture a snapshot of events at runtime. 00:04:33.141 [2024-09-29 00:16:48.776113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53821 for offline analysis/debug. 
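The rpc.sh suite starting here launches a long-lived spdk_tgt and then drives it over the default UNIX-domain RPC socket. A condensed sketch of that flow using the stock scripts/rpc.py client; the test itself goes through its rpc_cmd wrapper, which talks to the same /var/tmp/spdk.sock socket, and the bdev calls listed are the ones the rpc_integrity test below exercises:

rootdir=/home/vagrant/spdk_repo/spdk

# start the target with the bdev tracepoint group enabled, as rpc.sh does
"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!

# wait until the RPC socket accepts connections before issuing commands
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done

"$rootdir/scripts/rpc.py" bdev_malloc_create 8 512              # creates Malloc0
"$rootdir/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
"$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length            # expect 2
"$rootdir/scripts/rpc.py" bdev_passthru_delete Passthru0
"$rootdir/scripts/rpc.py" bdev_malloc_delete Malloc0

kill "$spdk_pid"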
00:04:33.141 [2024-09-29 00:16:48.776137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.710 00:16:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.710 00:16:49 -- common/autotest_common.sh@852 -- # return 0 00:04:33.710 00:16:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.710 00:16:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:33.710 00:16:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.710 00:16:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.710 00:16:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.710 00:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.710 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.710 ************************************ 00:04:33.710 START TEST rpc_integrity 00:04:33.710 ************************************ 00:04:33.710 00:16:49 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:33.710 00:16:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.710 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.710 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.710 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.711 00:16:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.711 00:16:49 -- rpc/rpc.sh@13 -- # jq length 00:04:33.970 00:16:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.970 00:16:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.970 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.970 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.970 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.970 00:16:49 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.970 00:16:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.970 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.970 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.970 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.970 00:16:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.970 { 00:04:33.970 "name": "Malloc0", 00:04:33.970 "aliases": [ 00:04:33.970 "72a98cb1-690a-4a1e-99dc-19440d5a3857" 00:04:33.970 ], 00:04:33.970 "product_name": "Malloc disk", 00:04:33.970 "block_size": 512, 00:04:33.970 "num_blocks": 16384, 00:04:33.970 "uuid": "72a98cb1-690a-4a1e-99dc-19440d5a3857", 00:04:33.970 "assigned_rate_limits": { 00:04:33.970 "rw_ios_per_sec": 0, 00:04:33.970 "rw_mbytes_per_sec": 0, 00:04:33.970 "r_mbytes_per_sec": 0, 00:04:33.970 "w_mbytes_per_sec": 0 00:04:33.970 }, 00:04:33.970 "claimed": false, 00:04:33.970 "zoned": false, 00:04:33.970 "supported_io_types": { 00:04:33.970 "read": true, 00:04:33.970 "write": true, 00:04:33.970 "unmap": true, 00:04:33.970 "write_zeroes": true, 00:04:33.970 "flush": true, 00:04:33.970 "reset": true, 00:04:33.970 "compare": false, 00:04:33.970 "compare_and_write": false, 00:04:33.970 "abort": true, 00:04:33.970 "nvme_admin": false, 00:04:33.970 "nvme_io": false 00:04:33.970 }, 00:04:33.970 "memory_domains": [ 00:04:33.970 { 00:04:33.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.970 
"dma_device_type": 2 00:04:33.970 } 00:04:33.970 ], 00:04:33.970 "driver_specific": {} 00:04:33.970 } 00:04:33.970 ]' 00:04:33.970 00:16:49 -- rpc/rpc.sh@17 -- # jq length 00:04:33.970 00:16:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.970 00:16:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.970 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.970 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.970 [2024-09-29 00:16:49.673419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.970 [2024-09-29 00:16:49.673660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.970 [2024-09-29 00:16:49.673689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ab4c0 00:04:33.970 [2024-09-29 00:16:49.673714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.970 [2024-09-29 00:16:49.675316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.970 [2024-09-29 00:16:49.675387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.970 Passthru0 00:04:33.970 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.970 00:16:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.970 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.970 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.970 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.970 00:16:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.970 { 00:04:33.970 "name": "Malloc0", 00:04:33.970 "aliases": [ 00:04:33.970 "72a98cb1-690a-4a1e-99dc-19440d5a3857" 00:04:33.970 ], 00:04:33.970 "product_name": "Malloc disk", 00:04:33.970 "block_size": 512, 00:04:33.970 "num_blocks": 16384, 00:04:33.970 "uuid": "72a98cb1-690a-4a1e-99dc-19440d5a3857", 00:04:33.970 "assigned_rate_limits": { 00:04:33.970 "rw_ios_per_sec": 0, 00:04:33.970 "rw_mbytes_per_sec": 0, 00:04:33.970 "r_mbytes_per_sec": 0, 00:04:33.970 "w_mbytes_per_sec": 0 00:04:33.970 }, 00:04:33.970 "claimed": true, 00:04:33.970 "claim_type": "exclusive_write", 00:04:33.970 "zoned": false, 00:04:33.970 "supported_io_types": { 00:04:33.970 "read": true, 00:04:33.970 "write": true, 00:04:33.970 "unmap": true, 00:04:33.970 "write_zeroes": true, 00:04:33.970 "flush": true, 00:04:33.970 "reset": true, 00:04:33.970 "compare": false, 00:04:33.970 "compare_and_write": false, 00:04:33.970 "abort": true, 00:04:33.970 "nvme_admin": false, 00:04:33.970 "nvme_io": false 00:04:33.970 }, 00:04:33.970 "memory_domains": [ 00:04:33.970 { 00:04:33.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.970 "dma_device_type": 2 00:04:33.970 } 00:04:33.970 ], 00:04:33.970 "driver_specific": {} 00:04:33.970 }, 00:04:33.970 { 00:04:33.970 "name": "Passthru0", 00:04:33.970 "aliases": [ 00:04:33.970 "e8ed6466-fb97-598c-ab28-f10463417504" 00:04:33.970 ], 00:04:33.970 "product_name": "passthru", 00:04:33.970 "block_size": 512, 00:04:33.970 "num_blocks": 16384, 00:04:33.970 "uuid": "e8ed6466-fb97-598c-ab28-f10463417504", 00:04:33.970 "assigned_rate_limits": { 00:04:33.970 "rw_ios_per_sec": 0, 00:04:33.970 "rw_mbytes_per_sec": 0, 00:04:33.970 "r_mbytes_per_sec": 0, 00:04:33.970 "w_mbytes_per_sec": 0 00:04:33.970 }, 00:04:33.970 "claimed": false, 00:04:33.970 "zoned": false, 00:04:33.970 "supported_io_types": { 00:04:33.970 "read": true, 00:04:33.970 "write": true, 00:04:33.970 "unmap": true, 00:04:33.970 
"write_zeroes": true, 00:04:33.970 "flush": true, 00:04:33.970 "reset": true, 00:04:33.970 "compare": false, 00:04:33.970 "compare_and_write": false, 00:04:33.970 "abort": true, 00:04:33.970 "nvme_admin": false, 00:04:33.970 "nvme_io": false 00:04:33.970 }, 00:04:33.971 "memory_domains": [ 00:04:33.971 { 00:04:33.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.971 "dma_device_type": 2 00:04:33.971 } 00:04:33.971 ], 00:04:33.971 "driver_specific": { 00:04:33.971 "passthru": { 00:04:33.971 "name": "Passthru0", 00:04:33.971 "base_bdev_name": "Malloc0" 00:04:33.971 } 00:04:33.971 } 00:04:33.971 } 00:04:33.971 ]' 00:04:33.971 00:16:49 -- rpc/rpc.sh@21 -- # jq length 00:04:33.971 00:16:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.971 00:16:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.971 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.971 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.971 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.971 00:16:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.971 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.971 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.971 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.971 00:16:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.971 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:33.971 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.971 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:33.971 00:16:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.971 00:16:49 -- rpc/rpc.sh@26 -- # jq length 00:04:34.230 ************************************ 00:04:34.230 END TEST rpc_integrity 00:04:34.230 ************************************ 00:04:34.230 00:16:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.230 00:04:34.230 real 0m0.300s 00:04:34.230 user 0m0.194s 00:04:34.230 sys 0m0.040s 00:04:34.230 00:16:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.230 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.230 00:16:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.230 00:16:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.230 00:16:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.230 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.230 ************************************ 00:04:34.230 START TEST rpc_plugins 00:04:34.230 ************************************ 00:04:34.230 00:16:49 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:34.230 00:16:49 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.230 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.230 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.230 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.230 00:16:49 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.230 00:16:49 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.230 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.230 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.230 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.230 00:16:49 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.230 { 00:04:34.230 "name": "Malloc1", 00:04:34.230 "aliases": [ 00:04:34.230 "84869c16-284c-4fcb-b13a-030775b8eb13" 00:04:34.230 ], 00:04:34.230 "product_name": "Malloc disk", 00:04:34.230 
"block_size": 4096, 00:04:34.230 "num_blocks": 256, 00:04:34.230 "uuid": "84869c16-284c-4fcb-b13a-030775b8eb13", 00:04:34.230 "assigned_rate_limits": { 00:04:34.230 "rw_ios_per_sec": 0, 00:04:34.230 "rw_mbytes_per_sec": 0, 00:04:34.230 "r_mbytes_per_sec": 0, 00:04:34.230 "w_mbytes_per_sec": 0 00:04:34.230 }, 00:04:34.230 "claimed": false, 00:04:34.230 "zoned": false, 00:04:34.230 "supported_io_types": { 00:04:34.230 "read": true, 00:04:34.230 "write": true, 00:04:34.230 "unmap": true, 00:04:34.230 "write_zeroes": true, 00:04:34.230 "flush": true, 00:04:34.230 "reset": true, 00:04:34.230 "compare": false, 00:04:34.230 "compare_and_write": false, 00:04:34.230 "abort": true, 00:04:34.230 "nvme_admin": false, 00:04:34.230 "nvme_io": false 00:04:34.230 }, 00:04:34.230 "memory_domains": [ 00:04:34.230 { 00:04:34.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.230 "dma_device_type": 2 00:04:34.230 } 00:04:34.230 ], 00:04:34.230 "driver_specific": {} 00:04:34.230 } 00:04:34.230 ]' 00:04:34.230 00:16:49 -- rpc/rpc.sh@32 -- # jq length 00:04:34.230 00:16:49 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.230 00:16:49 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.231 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.231 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.231 00:16:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.231 00:16:49 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.231 00:16:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.231 00:16:49 -- common/autotest_common.sh@10 -- # set +x 00:04:34.231 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.231 00:16:50 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.231 00:16:50 -- rpc/rpc.sh@36 -- # jq length 00:04:34.231 ************************************ 00:04:34.231 END TEST rpc_plugins 00:04:34.231 ************************************ 00:04:34.231 00:16:50 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.231 00:04:34.231 real 0m0.157s 00:04:34.231 user 0m0.105s 00:04:34.231 sys 0m0.014s 00:04:34.231 00:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.231 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.490 00:16:50 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.490 00:16:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.490 00:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.490 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.490 ************************************ 00:04:34.490 START TEST rpc_trace_cmd_test 00:04:34.490 ************************************ 00:04:34.490 00:16:50 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:34.490 00:16:50 -- rpc/rpc.sh@40 -- # local info 00:04:34.490 00:16:50 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.490 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.490 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.490 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.490 00:16:50 -- rpc/rpc.sh@42 -- # info='{ 00:04:34.490 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53821", 00:04:34.490 "tpoint_group_mask": "0x8", 00:04:34.490 "iscsi_conn": { 00:04:34.490 "mask": "0x2", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "scsi": { 00:04:34.490 "mask": "0x4", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "bdev": { 00:04:34.490 "mask": "0x8", 00:04:34.490 "tpoint_mask": 
"0xffffffffffffffff" 00:04:34.490 }, 00:04:34.490 "nvmf_rdma": { 00:04:34.490 "mask": "0x10", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "nvmf_tcp": { 00:04:34.490 "mask": "0x20", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "ftl": { 00:04:34.490 "mask": "0x40", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "blobfs": { 00:04:34.490 "mask": "0x80", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "dsa": { 00:04:34.490 "mask": "0x200", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "thread": { 00:04:34.490 "mask": "0x400", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "nvme_pcie": { 00:04:34.490 "mask": "0x800", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "iaa": { 00:04:34.490 "mask": "0x1000", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "nvme_tcp": { 00:04:34.490 "mask": "0x2000", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 }, 00:04:34.490 "bdev_nvme": { 00:04:34.490 "mask": "0x4000", 00:04:34.490 "tpoint_mask": "0x0" 00:04:34.490 } 00:04:34.490 }' 00:04:34.490 00:16:50 -- rpc/rpc.sh@43 -- # jq length 00:04:34.490 00:16:50 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:34.490 00:16:50 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.490 00:16:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.490 00:16:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.490 00:16:50 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.490 00:16:50 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.490 00:16:50 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.490 00:16:50 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.762 ************************************ 00:04:34.762 END TEST rpc_trace_cmd_test 00:04:34.762 ************************************ 00:04:34.762 00:16:50 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.762 00:04:34.762 real 0m0.283s 00:04:34.762 user 0m0.243s 00:04:34.762 sys 0m0.028s 00:04:34.762 00:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.762 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.762 00:16:50 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.762 00:16:50 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.762 00:16:50 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.762 00:16:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.762 00:16:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.762 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.762 ************************************ 00:04:34.762 START TEST rpc_daemon_integrity 00:04:34.762 ************************************ 00:04:34.762 00:16:50 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:34.762 00:16:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.762 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.762 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.762 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.762 00:16:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.762 00:16:50 -- rpc/rpc.sh@13 -- # jq length 00:04:34.762 00:16:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.762 00:16:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.762 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.762 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.762 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.762 00:16:50 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.762 00:16:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.762 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.762 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.762 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.762 00:16:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.762 { 00:04:34.762 "name": "Malloc2", 00:04:34.762 "aliases": [ 00:04:34.762 "25a50b93-d14d-47b6-bc1e-da13286878e3" 00:04:34.762 ], 00:04:34.762 "product_name": "Malloc disk", 00:04:34.762 "block_size": 512, 00:04:34.762 "num_blocks": 16384, 00:04:34.762 "uuid": "25a50b93-d14d-47b6-bc1e-da13286878e3", 00:04:34.762 "assigned_rate_limits": { 00:04:34.762 "rw_ios_per_sec": 0, 00:04:34.762 "rw_mbytes_per_sec": 0, 00:04:34.762 "r_mbytes_per_sec": 0, 00:04:34.762 "w_mbytes_per_sec": 0 00:04:34.762 }, 00:04:34.762 "claimed": false, 00:04:34.762 "zoned": false, 00:04:34.763 "supported_io_types": { 00:04:34.763 "read": true, 00:04:34.763 "write": true, 00:04:34.763 "unmap": true, 00:04:34.763 "write_zeroes": true, 00:04:34.763 "flush": true, 00:04:34.763 "reset": true, 00:04:34.763 "compare": false, 00:04:34.763 "compare_and_write": false, 00:04:34.763 "abort": true, 00:04:34.763 "nvme_admin": false, 00:04:34.763 "nvme_io": false 00:04:34.763 }, 00:04:34.763 "memory_domains": [ 00:04:34.763 { 00:04:34.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.763 "dma_device_type": 2 00:04:34.763 } 00:04:34.763 ], 00:04:34.763 "driver_specific": {} 00:04:34.763 } 00:04:34.763 ]' 00:04:34.763 00:16:50 -- rpc/rpc.sh@17 -- # jq length 00:04:34.763 00:16:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.763 00:16:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.763 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.763 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:34.763 [2024-09-29 00:16:50.585791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.763 [2024-09-29 00:16:50.585850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.763 [2024-09-29 00:16:50.585869] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ab1c0 00:04:34.763 [2024-09-29 00:16:50.585877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.763 [2024-09-29 00:16:50.587122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.763 [2024-09-29 00:16:50.587151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.763 Passthru0 00:04:34.763 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.763 00:16:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.763 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.763 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.035 00:16:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.035 { 00:04:35.035 "name": "Malloc2", 00:04:35.035 "aliases": [ 00:04:35.035 "25a50b93-d14d-47b6-bc1e-da13286878e3" 00:04:35.035 ], 00:04:35.035 "product_name": "Malloc disk", 00:04:35.035 "block_size": 512, 00:04:35.035 "num_blocks": 16384, 00:04:35.035 "uuid": "25a50b93-d14d-47b6-bc1e-da13286878e3", 00:04:35.035 "assigned_rate_limits": { 00:04:35.035 "rw_ios_per_sec": 0, 00:04:35.035 "rw_mbytes_per_sec": 0, 00:04:35.035 "r_mbytes_per_sec": 0, 00:04:35.035 
"w_mbytes_per_sec": 0 00:04:35.035 }, 00:04:35.035 "claimed": true, 00:04:35.035 "claim_type": "exclusive_write", 00:04:35.035 "zoned": false, 00:04:35.035 "supported_io_types": { 00:04:35.035 "read": true, 00:04:35.035 "write": true, 00:04:35.035 "unmap": true, 00:04:35.035 "write_zeroes": true, 00:04:35.035 "flush": true, 00:04:35.035 "reset": true, 00:04:35.035 "compare": false, 00:04:35.035 "compare_and_write": false, 00:04:35.035 "abort": true, 00:04:35.035 "nvme_admin": false, 00:04:35.035 "nvme_io": false 00:04:35.035 }, 00:04:35.035 "memory_domains": [ 00:04:35.035 { 00:04:35.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.035 "dma_device_type": 2 00:04:35.035 } 00:04:35.035 ], 00:04:35.035 "driver_specific": {} 00:04:35.035 }, 00:04:35.035 { 00:04:35.035 "name": "Passthru0", 00:04:35.035 "aliases": [ 00:04:35.035 "84ca798c-1846-57bd-be25-a49d9da8c04f" 00:04:35.035 ], 00:04:35.035 "product_name": "passthru", 00:04:35.035 "block_size": 512, 00:04:35.035 "num_blocks": 16384, 00:04:35.035 "uuid": "84ca798c-1846-57bd-be25-a49d9da8c04f", 00:04:35.035 "assigned_rate_limits": { 00:04:35.035 "rw_ios_per_sec": 0, 00:04:35.035 "rw_mbytes_per_sec": 0, 00:04:35.035 "r_mbytes_per_sec": 0, 00:04:35.035 "w_mbytes_per_sec": 0 00:04:35.035 }, 00:04:35.035 "claimed": false, 00:04:35.035 "zoned": false, 00:04:35.035 "supported_io_types": { 00:04:35.035 "read": true, 00:04:35.035 "write": true, 00:04:35.035 "unmap": true, 00:04:35.035 "write_zeroes": true, 00:04:35.035 "flush": true, 00:04:35.035 "reset": true, 00:04:35.035 "compare": false, 00:04:35.035 "compare_and_write": false, 00:04:35.035 "abort": true, 00:04:35.035 "nvme_admin": false, 00:04:35.035 "nvme_io": false 00:04:35.035 }, 00:04:35.035 "memory_domains": [ 00:04:35.035 { 00:04:35.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.035 "dma_device_type": 2 00:04:35.035 } 00:04:35.035 ], 00:04:35.035 "driver_specific": { 00:04:35.035 "passthru": { 00:04:35.035 "name": "Passthru0", 00:04:35.035 "base_bdev_name": "Malloc2" 00:04:35.035 } 00:04:35.035 } 00:04:35.035 } 00:04:35.035 ]' 00:04:35.035 00:16:50 -- rpc/rpc.sh@21 -- # jq length 00:04:35.035 00:16:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.035 00:16:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.035 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.035 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.035 00:16:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.035 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.035 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.035 00:16:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.035 00:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:35.035 00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 00:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:35.035 00:16:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.035 00:16:50 -- rpc/rpc.sh@26 -- # jq length 00:04:35.035 ************************************ 00:04:35.035 END TEST rpc_daemon_integrity 00:04:35.035 ************************************ 00:04:35.035 00:16:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.035 00:04:35.035 real 0m0.314s 00:04:35.035 user 0m0.215s 00:04:35.035 sys 0m0.033s 00:04:35.035 00:16:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.035 
00:16:50 -- common/autotest_common.sh@10 -- # set +x 00:04:35.035 00:16:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.035 00:16:50 -- rpc/rpc.sh@84 -- # killprocess 53821 00:04:35.035 00:16:50 -- common/autotest_common.sh@926 -- # '[' -z 53821 ']' 00:04:35.035 00:16:50 -- common/autotest_common.sh@930 -- # kill -0 53821 00:04:35.035 00:16:50 -- common/autotest_common.sh@931 -- # uname 00:04:35.035 00:16:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:35.035 00:16:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53821 00:04:35.035 killing process with pid 53821 00:04:35.035 00:16:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:35.035 00:16:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:35.035 00:16:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53821' 00:04:35.035 00:16:50 -- common/autotest_common.sh@945 -- # kill 53821 00:04:35.035 00:16:50 -- common/autotest_common.sh@950 -- # wait 53821 00:04:35.294 ************************************ 00:04:35.294 END TEST rpc 00:04:35.294 ************************************ 00:04:35.294 00:04:35.294 real 0m2.682s 00:04:35.294 user 0m3.630s 00:04:35.294 sys 0m0.558s 00:04:35.294 00:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.294 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.294 00:16:51 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.294 00:16:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.294 00:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.294 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.294 ************************************ 00:04:35.294 START TEST rpc_client 00:04:35.295 ************************************ 00:04:35.295 00:16:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:35.552 * Looking for test storage... 
00:04:35.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:35.552 00:16:51 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:35.552 OK 00:04:35.552 00:16:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:35.552 00:04:35.552 real 0m0.103s 00:04:35.552 user 0m0.047s 00:04:35.552 sys 0m0.062s 00:04:35.552 ************************************ 00:04:35.552 END TEST rpc_client 00:04:35.552 ************************************ 00:04:35.552 00:16:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.552 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.552 00:16:51 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.552 00:16:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.552 00:16:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.552 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.552 ************************************ 00:04:35.552 START TEST json_config 00:04:35.552 ************************************ 00:04:35.552 00:16:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:35.552 00:16:51 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.552 00:16:51 -- nvmf/common.sh@7 -- # uname -s 00:04:35.552 00:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.552 00:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.552 00:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.552 00:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.552 00:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.552 00:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.552 00:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.552 00:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.552 00:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.552 00:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.552 00:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:04:35.552 00:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:04:35.552 00:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.552 00:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.552 00:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.552 00:16:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.552 00:16:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.552 00:16:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.552 00:16:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.552 00:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.552 00:16:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.552 00:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.552 00:16:51 -- paths/export.sh@5 -- # export PATH 00:04:35.552 00:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.552 00:16:51 -- nvmf/common.sh@46 -- # : 0 00:04:35.552 00:16:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:35.552 00:16:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:35.552 00:16:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:35.552 00:16:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.552 00:16:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.552 00:16:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:35.552 00:16:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:35.552 00:16:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:35.552 00:16:51 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:35.552 INFO: JSON configuration test init 00:04:35.552 00:16:51 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:35.552 00:16:51 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:35.552 00:16:51 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:35.552 00:16:51 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:35.552 00:16:51 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:35.553 00:16:51 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:35.553 00:16:51 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:35.553 00:16:51 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:35.553 00:16:51 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:35.553 00:16:51 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:35.553 00:16:51 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:35.553 00:16:51 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:35.553 00:16:51 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 
00:04:35.553 00:16:51 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:35.553 00:16:51 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:35.553 00:16:51 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:35.553 00:16:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.553 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.553 00:16:51 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:35.553 00:16:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.553 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.553 00:16:51 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:35.553 00:16:51 -- json_config/json_config.sh@98 -- # local app=target 00:04:35.553 00:16:51 -- json_config/json_config.sh@99 -- # shift 00:04:35.553 00:16:51 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:35.553 00:16:51 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:35.553 00:16:51 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:35.553 00:16:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:35.553 00:16:51 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:35.553 00:16:51 -- json_config/json_config.sh@111 -- # app_pid[$app]=54058 00:04:35.553 00:16:51 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:35.553 00:16:51 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:35.553 Waiting for target to run... 00:04:35.553 00:16:51 -- json_config/json_config.sh@114 -- # waitforlisten 54058 /var/tmp/spdk_tgt.sock 00:04:35.553 00:16:51 -- common/autotest_common.sh@819 -- # '[' -z 54058 ']' 00:04:35.553 00:16:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.553 00:16:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:35.553 00:16:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.553 00:16:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:35.553 00:16:51 -- common/autotest_common.sh@10 -- # set +x 00:04:35.811 [2024-09-29 00:16:51.452127] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:35.811 [2024-09-29 00:16:51.452445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54058 ] 00:04:36.070 [2024-09-29 00:16:51.763863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.070 [2024-09-29 00:16:51.816373] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.070 [2024-09-29 00:16:51.816590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.638 00:16:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.638 00:16:52 -- common/autotest_common.sh@852 -- # return 0 00:04:36.638 00:04:36.638 00:16:52 -- json_config/json_config.sh@115 -- # echo '' 00:04:36.638 00:16:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:36.638 00:16:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:36.638 00:16:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:36.638 00:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.638 00:16:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:36.638 00:16:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:36.638 00:16:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:36.638 00:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.898 00:16:52 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:36.898 00:16:52 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:36.898 00:16:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:37.156 00:16:52 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:37.156 00:16:52 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:37.156 00:16:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:37.156 00:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:37.156 00:16:52 -- json_config/json_config.sh@48 -- # local ret=0 00:04:37.156 00:16:52 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:37.156 00:16:52 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:37.156 00:16:52 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:37.156 00:16:52 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:37.156 00:16:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:37.415 00:16:53 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:37.415 00:16:53 -- json_config/json_config.sh@51 -- # local get_types 00:04:37.415 00:16:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:37.415 00:16:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:37.415 00:16:53 -- common/autotest_common.sh@10 -- # set +x 00:04:37.415 00:16:53 -- json_config/json_config.sh@58 -- # return 0 00:04:37.415 00:16:53 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
00:04:37.415 00:16:53 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:37.415 00:16:53 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:37.415 00:16:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:37.415 00:16:53 -- common/autotest_common.sh@10 -- # set +x 00:04:37.415 00:16:53 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:37.415 00:16:53 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:37.415 00:16:53 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.415 00:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.673 MallocForNvmf0 00:04:37.673 00:16:53 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.673 00:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.931 MallocForNvmf1 00:04:37.931 00:16:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.931 00:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.190 [2024-09-29 00:16:53.864666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.190 00:16:53 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.190 00:16:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.449 00:16:54 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.449 00:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.708 00:16:54 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.708 00:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.967 00:16:54 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.967 00:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.967 [2024-09-29 00:16:54.813214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:39.226 00:16:54 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:39.226 00:16:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:39.226 00:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.226 00:16:54 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:39.226 00:16:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:39.226 00:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:39.226 00:16:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:39.226 00:16:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.227 00:16:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.485 MallocBdevForConfigChangeCheck 00:04:39.485 00:16:55 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:39.485 00:16:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:39.485 00:16:55 -- common/autotest_common.sh@10 -- # set +x 00:04:39.485 00:16:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:39.485 00:16:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.745 INFO: shutting down applications... 00:04:39.745 00:16:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:39.745 00:16:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:39.745 00:16:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:39.745 00:16:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:39.745 00:16:55 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:40.004 Calling clear_iscsi_subsystem 00:04:40.004 Calling clear_nvmf_subsystem 00:04:40.004 Calling clear_nbd_subsystem 00:04:40.004 Calling clear_ublk_subsystem 00:04:40.004 Calling clear_vhost_blk_subsystem 00:04:40.004 Calling clear_vhost_scsi_subsystem 00:04:40.004 Calling clear_scheduler_subsystem 00:04:40.004 Calling clear_bdev_subsystem 00:04:40.004 Calling clear_accel_subsystem 00:04:40.004 Calling clear_vmd_subsystem 00:04:40.004 Calling clear_sock_subsystem 00:04:40.004 Calling clear_iobuf_subsystem 00:04:40.004 00:16:55 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:40.004 00:16:55 -- json_config/json_config.sh@396 -- # count=100 00:04:40.004 00:16:55 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:40.263 00:16:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.263 00:16:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:40.263 00:16:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:40.523 00:16:56 -- json_config/json_config.sh@398 -- # break 00:04:40.523 00:16:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:40.523 00:16:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:40.523 00:16:56 -- json_config/json_config.sh@120 -- # local app=target 00:04:40.523 00:16:56 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:40.523 00:16:56 -- json_config/json_config.sh@124 -- # [[ -n 54058 ]] 00:04:40.523 00:16:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 54058 00:04:40.523 00:16:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:04:40.523 00:16:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:40.523 00:16:56 -- json_config/json_config.sh@130 -- # kill -0 54058 00:04:40.523 00:16:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:41.092 00:16:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:41.092 00:16:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:41.092 00:16:56 -- json_config/json_config.sh@130 -- # kill -0 54058 00:04:41.092 00:16:56 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:41.092 SPDK target shutdown done 00:04:41.092 00:16:56 -- json_config/json_config.sh@132 -- # break 00:04:41.092 00:16:56 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:41.092 00:16:56 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:41.092 INFO: relaunching applications... 00:04:41.092 00:16:56 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:41.092 00:16:56 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.092 00:16:56 -- json_config/json_config.sh@98 -- # local app=target 00:04:41.092 00:16:56 -- json_config/json_config.sh@99 -- # shift 00:04:41.092 00:16:56 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:41.092 00:16:56 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:41.092 00:16:56 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:41.092 00:16:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:41.092 00:16:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:41.092 Waiting for target to run... 00:04:41.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.092 00:16:56 -- json_config/json_config.sh@111 -- # app_pid[$app]=54249 00:04:41.092 00:16:56 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.092 00:16:56 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:41.092 00:16:56 -- json_config/json_config.sh@114 -- # waitforlisten 54249 /var/tmp/spdk_tgt.sock 00:04:41.092 00:16:56 -- common/autotest_common.sh@819 -- # '[' -z 54249 ']' 00:04:41.092 00:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.092 00:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.092 00:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.092 00:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.092 00:16:56 -- common/autotest_common.sh@10 -- # set +x 00:04:41.092 [2024-09-29 00:16:56.788799] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:41.092 [2024-09-29 00:16:56.789148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54249 ] 00:04:41.351 [2024-09-29 00:16:57.071223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.351 [2024-09-29 00:16:57.106028] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.351 [2024-09-29 00:16:57.106441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.610 [2024-09-29 00:16:57.400721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.610 [2024-09-29 00:16:57.432783] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.869 00:16:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:41.869 00:16:57 -- common/autotest_common.sh@852 -- # return 0 00:04:41.869 00:04:41.869 INFO: Checking if target configuration is the same... 00:04:41.869 00:16:57 -- json_config/json_config.sh@115 -- # echo '' 00:04:41.869 00:16:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:41.869 00:16:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:41.869 00:16:57 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.869 00:16:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:41.869 00:16:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.869 + '[' 2 -ne 2 ']' 00:04:41.869 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:41.869 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:41.869 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:41.869 +++ basename /dev/fd/62 00:04:41.869 ++ mktemp /tmp/62.XXX 00:04:42.128 + tmp_file_1=/tmp/62.Bat 00:04:42.128 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.128 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.128 + tmp_file_2=/tmp/spdk_tgt_config.json.gpm 00:04:42.128 + ret=0 00:04:42.128 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.388 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:42.388 + diff -u /tmp/62.Bat /tmp/spdk_tgt_config.json.gpm 00:04:42.388 INFO: JSON config files are the same 00:04:42.388 + echo 'INFO: JSON config files are the same' 00:04:42.388 + rm /tmp/62.Bat /tmp/spdk_tgt_config.json.gpm 00:04:42.388 + exit 0 00:04:42.388 INFO: changing configuration and checking if this can be detected... 00:04:42.388 00:16:58 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:42.388 00:16:58 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
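For reference, the configuration being compared here is the NVMe-oF/TCP setup built earlier in the run and saved to spdk_tgt_config.json. A sketch of that setup as the same sequence of RPCs the test issued through tgt_rpc, with the socket and paths taken from the trace above:
# backing bdevs for the two namespaces
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, both namespaces, and a listener on 127.0.0.1:4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420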
00:04:42.388 00:16:58 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.388 00:16:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.646 00:16:58 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:42.646 00:16:58 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.646 00:16:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.646 + '[' 2 -ne 2 ']' 00:04:42.646 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:42.647 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:42.647 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:42.647 +++ basename /dev/fd/62 00:04:42.647 ++ mktemp /tmp/62.XXX 00:04:42.647 + tmp_file_1=/tmp/62.XtT 00:04:42.647 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.647 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.647 + tmp_file_2=/tmp/spdk_tgt_config.json.1f9 00:04:42.647 + ret=0 00:04:42.647 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:43.215 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:43.215 + diff -u /tmp/62.XtT /tmp/spdk_tgt_config.json.1f9 00:04:43.215 + ret=1 00:04:43.215 + echo '=== Start of file: /tmp/62.XtT ===' 00:04:43.215 + cat /tmp/62.XtT 00:04:43.215 + echo '=== End of file: /tmp/62.XtT ===' 00:04:43.215 + echo '' 00:04:43.215 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1f9 ===' 00:04:43.215 + cat /tmp/spdk_tgt_config.json.1f9 00:04:43.215 + echo '=== End of file: /tmp/spdk_tgt_config.json.1f9 ===' 00:04:43.215 + echo '' 00:04:43.215 + rm /tmp/62.XtT /tmp/spdk_tgt_config.json.1f9 00:04:43.215 + exit 1 00:04:43.215 INFO: configuration change detected. 00:04:43.215 00:16:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
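The ret=0 and ret=1 verdicts above come from diffing two sorted JSON dumps of the configuration. A condensed sketch of that comparison, assuming the sort filter reads stdin and writes stdout as its argument-less invocation in the trace suggests; the temporary file names are illustrative only:
# dump the live configuration, then normalize both sides before diffing
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted
# identical output means no drift (exit 0); once MallocBdevForConfigChangeCheck is deleted, diff returns 1
diff -u /tmp/saved.sorted /tmp/live.sorted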
00:04:43.215 00:16:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:43.215 00:16:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:43.215 00:16:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.215 00:16:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.215 00:16:58 -- json_config/json_config.sh@360 -- # local ret=0 00:04:43.215 00:16:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:43.215 00:16:58 -- json_config/json_config.sh@370 -- # [[ -n 54249 ]] 00:04:43.215 00:16:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:43.215 00:16:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.215 00:16:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.215 00:16:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.215 00:16:58 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:43.215 00:16:58 -- json_config/json_config.sh@246 -- # uname -s 00:04:43.215 00:16:58 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:43.215 00:16:58 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:43.215 00:16:58 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:43.215 00:16:58 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.215 00:16:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:43.215 00:16:58 -- common/autotest_common.sh@10 -- # set +x 00:04:43.215 00:16:58 -- json_config/json_config.sh@376 -- # killprocess 54249 00:04:43.215 00:16:58 -- common/autotest_common.sh@926 -- # '[' -z 54249 ']' 00:04:43.215 00:16:58 -- common/autotest_common.sh@930 -- # kill -0 54249 00:04:43.215 00:16:58 -- common/autotest_common.sh@931 -- # uname 00:04:43.215 00:16:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:43.215 00:16:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54249 00:04:43.215 killing process with pid 54249 00:04:43.215 00:16:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:43.215 00:16:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:43.215 00:16:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54249' 00:04:43.215 00:16:58 -- common/autotest_common.sh@945 -- # kill 54249 00:04:43.215 00:16:58 -- common/autotest_common.sh@950 -- # wait 54249 00:04:43.474 00:16:59 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:43.474 00:16:59 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:43.474 00:16:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:43.474 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.474 INFO: Success 00:04:43.474 00:16:59 -- json_config/json_config.sh@381 -- # return 0 00:04:43.474 00:16:59 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:43.474 00:04:43.474 real 0m7.855s 00:04:43.474 user 0m11.402s 00:04:43.474 sys 0m1.318s 00:04:43.474 ************************************ 00:04:43.474 END TEST json_config 00:04:43.474 ************************************ 00:04:43.474 00:16:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.474 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.474 00:16:59 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.474 
00:16:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.474 00:16:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.474 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.474 ************************************ 00:04:43.474 START TEST json_config_extra_key 00:04:43.474 ************************************ 00:04:43.474 00:16:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:43.474 00:16:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.474 00:16:59 -- nvmf/common.sh@7 -- # uname -s 00:04:43.474 00:16:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.474 00:16:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.474 00:16:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.474 00:16:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.475 00:16:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.475 00:16:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.475 00:16:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.475 00:16:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.475 00:16:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.475 00:16:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.475 00:16:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:04:43.475 00:16:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:04:43.475 00:16:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.475 00:16:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.475 00:16:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.475 00:16:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.475 00:16:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.475 00:16:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.475 00:16:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.475 00:16:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.475 00:16:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.475 00:16:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:43.475 00:16:59 -- paths/export.sh@5 -- # export PATH 00:04:43.475 00:16:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.475 00:16:59 -- nvmf/common.sh@46 -- # : 0 00:04:43.475 00:16:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:43.475 00:16:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:43.475 00:16:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:43.475 00:16:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.475 00:16:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.475 00:16:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:43.475 00:16:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:43.475 00:16:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.475 INFO: launching applications... 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:43.475 Waiting for target to run... 00:04:43.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54383 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54383 /var/tmp/spdk_tgt.sock 00:04:43.475 00:16:59 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:43.475 00:16:59 -- common/autotest_common.sh@819 -- # '[' -z 54383 ']' 00:04:43.475 00:16:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.475 00:16:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:43.475 00:16:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.475 00:16:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:43.475 00:16:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.733 [2024-09-29 00:16:59.331728] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:43.733 [2024-09-29 00:16:59.331803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54383 ] 00:04:43.993 [2024-09-29 00:16:59.601872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.993 [2024-09-29 00:16:59.641612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.993 [2024-09-29 00:16:59.641775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.560 00:04:44.560 INFO: shutting down applications... 00:04:44.560 00:17:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.560 00:17:00 -- common/autotest_common.sh@852 -- # return 0 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
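The shutdown that follows reuses the same stop-and-poll pattern as json_config.sh: send SIGINT to the target and wait until its PID disappears. A minimal sketch of the launch-and-stop cycle, using the spdk_tgt invocation shown above; the background and PID plumbing here is illustrative glue, and the 30 iterations of 0.5 s mirror the loop in the trace:
# start the target directly from a JSON config instead of issuing RPCs after boot
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!
# ask it to shut down, then poll until the process is really gone
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2> /dev/null || break
    sleep 0.5
done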
00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54383 ]] 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54383 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54383 00:04:44.560 00:17:00 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54383 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:45.128 SPDK target shutdown done 00:04:45.128 00:17:00 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:45.128 Success 00:04:45.128 00:04:45.128 real 0m1.641s 00:04:45.128 user 0m1.565s 00:04:45.128 sys 0m0.276s 00:04:45.128 00:17:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.128 ************************************ 00:04:45.128 END TEST json_config_extra_key 00:04:45.128 ************************************ 00:04:45.128 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.128 00:17:00 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.128 00:17:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.128 00:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.128 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.128 ************************************ 00:04:45.128 START TEST alias_rpc 00:04:45.128 ************************************ 00:04:45.128 00:17:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.128 * Looking for test storage... 00:04:45.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:45.128 00:17:00 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.128 00:17:00 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54452 00:04:45.128 00:17:00 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54452 00:04:45.128 00:17:00 -- common/autotest_common.sh@819 -- # '[' -z 54452 ']' 00:04:45.128 00:17:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.128 00:17:00 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.128 00:17:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.128 00:17:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:45.128 00:17:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.128 00:17:00 -- common/autotest_common.sh@10 -- # set +x 00:04:45.386 [2024-09-29 00:17:01.031368] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:45.386 [2024-09-29 00:17:01.031477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54452 ] 00:04:45.386 [2024-09-29 00:17:01.167776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.644 [2024-09-29 00:17:01.234942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:45.644 [2024-09-29 00:17:01.235130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.210 00:17:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:46.210 00:17:02 -- common/autotest_common.sh@852 -- # return 0 00:04:46.210 00:17:02 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.469 00:17:02 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54452 00:04:46.469 00:17:02 -- common/autotest_common.sh@926 -- # '[' -z 54452 ']' 00:04:46.469 00:17:02 -- common/autotest_common.sh@930 -- # kill -0 54452 00:04:46.469 00:17:02 -- common/autotest_common.sh@931 -- # uname 00:04:46.469 00:17:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.469 00:17:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54452 00:04:46.469 killing process with pid 54452 00:04:46.469 00:17:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.469 00:17:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.469 00:17:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54452' 00:04:46.469 00:17:02 -- common/autotest_common.sh@945 -- # kill 54452 00:04:46.469 00:17:02 -- common/autotest_common.sh@950 -- # wait 54452 00:04:46.727 ************************************ 00:04:46.727 END TEST alias_rpc 00:04:46.727 ************************************ 00:04:46.727 00:04:46.727 real 0m1.649s 00:04:46.727 user 0m2.011s 00:04:46.727 sys 0m0.292s 00:04:46.727 00:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.727 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 00:17:02 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:46.985 00:17:02 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.985 00:17:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.985 00:17:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.985 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.985 ************************************ 00:04:46.985 START TEST spdkcli_tcp 00:04:46.985 ************************************ 00:04:46.985 00:17:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.985 * Looking for test storage... 
00:04:46.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:46.985 00:17:02 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:46.985 00:17:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:46.986 00:17:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:46.986 00:17:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.986 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54516 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:46.986 00:17:02 -- spdkcli/tcp.sh@27 -- # waitforlisten 54516 00:04:46.986 00:17:02 -- common/autotest_common.sh@819 -- # '[' -z 54516 ']' 00:04:46.986 00:17:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.986 00:17:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.986 00:17:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.986 00:17:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.986 00:17:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.986 [2024-09-29 00:17:02.746465] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:46.986 [2024-09-29 00:17:02.746573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54516 ] 00:04:47.244 [2024-09-29 00:17:02.884298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.244 [2024-09-29 00:17:02.933882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.244 [2024-09-29 00:17:02.934162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.244 [2024-09-29 00:17:02.934308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.178 00:17:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.178 00:17:03 -- common/autotest_common.sh@852 -- # return 0 00:04:48.178 00:17:03 -- spdkcli/tcp.sh@31 -- # socat_pid=54533 00:04:48.178 00:17:03 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.178 00:17:03 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.178 [ 00:04:48.178 "bdev_malloc_delete", 00:04:48.178 "bdev_malloc_create", 00:04:48.178 "bdev_null_resize", 00:04:48.178 "bdev_null_delete", 00:04:48.178 "bdev_null_create", 00:04:48.178 "bdev_nvme_cuse_unregister", 00:04:48.178 "bdev_nvme_cuse_register", 00:04:48.178 "bdev_opal_new_user", 00:04:48.178 "bdev_opal_set_lock_state", 00:04:48.178 "bdev_opal_delete", 00:04:48.178 "bdev_opal_get_info", 00:04:48.178 "bdev_opal_create", 00:04:48.178 "bdev_nvme_opal_revert", 00:04:48.178 "bdev_nvme_opal_init", 00:04:48.178 "bdev_nvme_send_cmd", 00:04:48.178 "bdev_nvme_get_path_iostat", 00:04:48.178 "bdev_nvme_get_mdns_discovery_info", 00:04:48.178 "bdev_nvme_stop_mdns_discovery", 00:04:48.178 "bdev_nvme_start_mdns_discovery", 00:04:48.178 "bdev_nvme_set_multipath_policy", 00:04:48.178 "bdev_nvme_set_preferred_path", 00:04:48.178 "bdev_nvme_get_io_paths", 00:04:48.178 "bdev_nvme_remove_error_injection", 00:04:48.178 "bdev_nvme_add_error_injection", 00:04:48.178 "bdev_nvme_get_discovery_info", 00:04:48.178 "bdev_nvme_stop_discovery", 00:04:48.178 "bdev_nvme_start_discovery", 00:04:48.178 "bdev_nvme_get_controller_health_info", 00:04:48.178 "bdev_nvme_disable_controller", 00:04:48.178 "bdev_nvme_enable_controller", 00:04:48.178 "bdev_nvme_reset_controller", 00:04:48.178 "bdev_nvme_get_transport_statistics", 00:04:48.178 "bdev_nvme_apply_firmware", 00:04:48.178 "bdev_nvme_detach_controller", 00:04:48.178 "bdev_nvme_get_controllers", 00:04:48.178 "bdev_nvme_attach_controller", 00:04:48.178 "bdev_nvme_set_hotplug", 00:04:48.179 "bdev_nvme_set_options", 00:04:48.179 "bdev_passthru_delete", 00:04:48.179 "bdev_passthru_create", 00:04:48.179 "bdev_lvol_grow_lvstore", 00:04:48.179 "bdev_lvol_get_lvols", 00:04:48.179 "bdev_lvol_get_lvstores", 00:04:48.179 "bdev_lvol_delete", 00:04:48.179 "bdev_lvol_set_read_only", 00:04:48.179 "bdev_lvol_resize", 00:04:48.179 "bdev_lvol_decouple_parent", 00:04:48.179 "bdev_lvol_inflate", 00:04:48.179 "bdev_lvol_rename", 00:04:48.179 "bdev_lvol_clone_bdev", 00:04:48.179 "bdev_lvol_clone", 00:04:48.179 "bdev_lvol_snapshot", 00:04:48.179 "bdev_lvol_create", 00:04:48.179 "bdev_lvol_delete_lvstore", 00:04:48.179 "bdev_lvol_rename_lvstore", 00:04:48.179 "bdev_lvol_create_lvstore", 00:04:48.179 "bdev_raid_set_options", 00:04:48.179 "bdev_raid_remove_base_bdev", 00:04:48.179 "bdev_raid_add_base_bdev", 
00:04:48.179 "bdev_raid_delete", 00:04:48.179 "bdev_raid_create", 00:04:48.179 "bdev_raid_get_bdevs", 00:04:48.179 "bdev_error_inject_error", 00:04:48.179 "bdev_error_delete", 00:04:48.179 "bdev_error_create", 00:04:48.179 "bdev_split_delete", 00:04:48.179 "bdev_split_create", 00:04:48.179 "bdev_delay_delete", 00:04:48.179 "bdev_delay_create", 00:04:48.179 "bdev_delay_update_latency", 00:04:48.179 "bdev_zone_block_delete", 00:04:48.179 "bdev_zone_block_create", 00:04:48.179 "blobfs_create", 00:04:48.179 "blobfs_detect", 00:04:48.179 "blobfs_set_cache_size", 00:04:48.179 "bdev_aio_delete", 00:04:48.179 "bdev_aio_rescan", 00:04:48.179 "bdev_aio_create", 00:04:48.179 "bdev_ftl_set_property", 00:04:48.179 "bdev_ftl_get_properties", 00:04:48.179 "bdev_ftl_get_stats", 00:04:48.179 "bdev_ftl_unmap", 00:04:48.179 "bdev_ftl_unload", 00:04:48.179 "bdev_ftl_delete", 00:04:48.179 "bdev_ftl_load", 00:04:48.179 "bdev_ftl_create", 00:04:48.179 "bdev_virtio_attach_controller", 00:04:48.179 "bdev_virtio_scsi_get_devices", 00:04:48.179 "bdev_virtio_detach_controller", 00:04:48.179 "bdev_virtio_blk_set_hotplug", 00:04:48.179 "bdev_iscsi_delete", 00:04:48.179 "bdev_iscsi_create", 00:04:48.179 "bdev_iscsi_set_options", 00:04:48.179 "bdev_uring_delete", 00:04:48.179 "bdev_uring_create", 00:04:48.179 "accel_error_inject_error", 00:04:48.179 "ioat_scan_accel_module", 00:04:48.179 "dsa_scan_accel_module", 00:04:48.179 "iaa_scan_accel_module", 00:04:48.179 "vfu_virtio_create_scsi_endpoint", 00:04:48.179 "vfu_virtio_scsi_remove_target", 00:04:48.179 "vfu_virtio_scsi_add_target", 00:04:48.179 "vfu_virtio_create_blk_endpoint", 00:04:48.179 "vfu_virtio_delete_endpoint", 00:04:48.179 "iscsi_set_options", 00:04:48.179 "iscsi_get_auth_groups", 00:04:48.179 "iscsi_auth_group_remove_secret", 00:04:48.179 "iscsi_auth_group_add_secret", 00:04:48.179 "iscsi_delete_auth_group", 00:04:48.179 "iscsi_create_auth_group", 00:04:48.179 "iscsi_set_discovery_auth", 00:04:48.179 "iscsi_get_options", 00:04:48.179 "iscsi_target_node_request_logout", 00:04:48.179 "iscsi_target_node_set_redirect", 00:04:48.179 "iscsi_target_node_set_auth", 00:04:48.179 "iscsi_target_node_add_lun", 00:04:48.179 "iscsi_get_connections", 00:04:48.179 "iscsi_portal_group_set_auth", 00:04:48.179 "iscsi_start_portal_group", 00:04:48.179 "iscsi_delete_portal_group", 00:04:48.179 "iscsi_create_portal_group", 00:04:48.179 "iscsi_get_portal_groups", 00:04:48.179 "iscsi_delete_target_node", 00:04:48.179 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.179 "iscsi_target_node_add_pg_ig_maps", 00:04:48.179 "iscsi_create_target_node", 00:04:48.179 "iscsi_get_target_nodes", 00:04:48.179 "iscsi_delete_initiator_group", 00:04:48.179 "iscsi_initiator_group_remove_initiators", 00:04:48.179 "iscsi_initiator_group_add_initiators", 00:04:48.179 "iscsi_create_initiator_group", 00:04:48.179 "iscsi_get_initiator_groups", 00:04:48.179 "nvmf_set_crdt", 00:04:48.179 "nvmf_set_config", 00:04:48.179 "nvmf_set_max_subsystems", 00:04:48.179 "nvmf_subsystem_get_listeners", 00:04:48.179 "nvmf_subsystem_get_qpairs", 00:04:48.179 "nvmf_subsystem_get_controllers", 00:04:48.179 "nvmf_get_stats", 00:04:48.179 "nvmf_get_transports", 00:04:48.179 "nvmf_create_transport", 00:04:48.179 "nvmf_get_targets", 00:04:48.179 "nvmf_delete_target", 00:04:48.179 "nvmf_create_target", 00:04:48.179 "nvmf_subsystem_allow_any_host", 00:04:48.179 "nvmf_subsystem_remove_host", 00:04:48.179 "nvmf_subsystem_add_host", 00:04:48.179 "nvmf_subsystem_remove_ns", 00:04:48.179 "nvmf_subsystem_add_ns", 00:04:48.179 
"nvmf_subsystem_listener_set_ana_state", 00:04:48.179 "nvmf_discovery_get_referrals", 00:04:48.179 "nvmf_discovery_remove_referral", 00:04:48.179 "nvmf_discovery_add_referral", 00:04:48.179 "nvmf_subsystem_remove_listener", 00:04:48.179 "nvmf_subsystem_add_listener", 00:04:48.179 "nvmf_delete_subsystem", 00:04:48.179 "nvmf_create_subsystem", 00:04:48.179 "nvmf_get_subsystems", 00:04:48.179 "env_dpdk_get_mem_stats", 00:04:48.179 "nbd_get_disks", 00:04:48.179 "nbd_stop_disk", 00:04:48.179 "nbd_start_disk", 00:04:48.179 "ublk_recover_disk", 00:04:48.179 "ublk_get_disks", 00:04:48.179 "ublk_stop_disk", 00:04:48.179 "ublk_start_disk", 00:04:48.179 "ublk_destroy_target", 00:04:48.179 "ublk_create_target", 00:04:48.179 "virtio_blk_create_transport", 00:04:48.179 "virtio_blk_get_transports", 00:04:48.179 "vhost_controller_set_coalescing", 00:04:48.179 "vhost_get_controllers", 00:04:48.179 "vhost_delete_controller", 00:04:48.179 "vhost_create_blk_controller", 00:04:48.179 "vhost_scsi_controller_remove_target", 00:04:48.179 "vhost_scsi_controller_add_target", 00:04:48.179 "vhost_start_scsi_controller", 00:04:48.179 "vhost_create_scsi_controller", 00:04:48.179 "thread_set_cpumask", 00:04:48.179 "framework_get_scheduler", 00:04:48.179 "framework_set_scheduler", 00:04:48.179 "framework_get_reactors", 00:04:48.179 "thread_get_io_channels", 00:04:48.179 "thread_get_pollers", 00:04:48.179 "thread_get_stats", 00:04:48.179 "framework_monitor_context_switch", 00:04:48.179 "spdk_kill_instance", 00:04:48.179 "log_enable_timestamps", 00:04:48.179 "log_get_flags", 00:04:48.179 "log_clear_flag", 00:04:48.179 "log_set_flag", 00:04:48.179 "log_get_level", 00:04:48.179 "log_set_level", 00:04:48.179 "log_get_print_level", 00:04:48.179 "log_set_print_level", 00:04:48.179 "framework_enable_cpumask_locks", 00:04:48.179 "framework_disable_cpumask_locks", 00:04:48.179 "framework_wait_init", 00:04:48.179 "framework_start_init", 00:04:48.179 "scsi_get_devices", 00:04:48.179 "bdev_get_histogram", 00:04:48.179 "bdev_enable_histogram", 00:04:48.179 "bdev_set_qos_limit", 00:04:48.179 "bdev_set_qd_sampling_period", 00:04:48.179 "bdev_get_bdevs", 00:04:48.179 "bdev_reset_iostat", 00:04:48.179 "bdev_get_iostat", 00:04:48.179 "bdev_examine", 00:04:48.179 "bdev_wait_for_examine", 00:04:48.179 "bdev_set_options", 00:04:48.179 "notify_get_notifications", 00:04:48.179 "notify_get_types", 00:04:48.179 "accel_get_stats", 00:04:48.179 "accel_set_options", 00:04:48.179 "accel_set_driver", 00:04:48.179 "accel_crypto_key_destroy", 00:04:48.179 "accel_crypto_keys_get", 00:04:48.179 "accel_crypto_key_create", 00:04:48.179 "accel_assign_opc", 00:04:48.179 "accel_get_module_info", 00:04:48.179 "accel_get_opc_assignments", 00:04:48.179 "vmd_rescan", 00:04:48.179 "vmd_remove_device", 00:04:48.179 "vmd_enable", 00:04:48.179 "sock_set_default_impl", 00:04:48.179 "sock_impl_set_options", 00:04:48.179 "sock_impl_get_options", 00:04:48.179 "iobuf_get_stats", 00:04:48.179 "iobuf_set_options", 00:04:48.179 "framework_get_pci_devices", 00:04:48.179 "framework_get_config", 00:04:48.179 "framework_get_subsystems", 00:04:48.179 "vfu_tgt_set_base_path", 00:04:48.179 "trace_get_info", 00:04:48.179 "trace_get_tpoint_group_mask", 00:04:48.179 "trace_disable_tpoint_group", 00:04:48.179 "trace_enable_tpoint_group", 00:04:48.179 "trace_clear_tpoint_mask", 00:04:48.179 "trace_set_tpoint_mask", 00:04:48.179 "spdk_get_version", 00:04:48.179 "rpc_get_methods" 00:04:48.179 ] 00:04:48.179 00:17:03 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.179 
00:17:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.179 00:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:48.179 00:17:03 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.179 00:17:03 -- spdkcli/tcp.sh@38 -- # killprocess 54516 00:04:48.179 00:17:03 -- common/autotest_common.sh@926 -- # '[' -z 54516 ']' 00:04:48.179 00:17:03 -- common/autotest_common.sh@930 -- # kill -0 54516 00:04:48.179 00:17:03 -- common/autotest_common.sh@931 -- # uname 00:04:48.179 00:17:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:48.179 00:17:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54516 00:04:48.179 00:17:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:48.179 00:17:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:48.179 00:17:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54516' 00:04:48.179 killing process with pid 54516 00:04:48.179 00:17:04 -- common/autotest_common.sh@945 -- # kill 54516 00:04:48.179 00:17:04 -- common/autotest_common.sh@950 -- # wait 54516 00:04:48.438 ************************************ 00:04:48.438 END TEST spdkcli_tcp 00:04:48.438 ************************************ 00:04:48.438 00:04:48.438 real 0m1.666s 00:04:48.438 user 0m3.260s 00:04:48.438 sys 0m0.326s 00:04:48.438 00:17:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.438 00:17:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.697 00:17:04 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.697 00:17:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.697 00:17:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.697 00:17:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.697 ************************************ 00:04:48.697 START TEST dpdk_mem_utility 00:04:48.697 ************************************ 00:04:48.697 00:17:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:48.697 * Looking for test storage... 00:04:48.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:48.697 00:17:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:48.697 00:17:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54606 00:04:48.697 00:17:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.697 00:17:04 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54606 00:04:48.697 00:17:04 -- common/autotest_common.sh@819 -- # '[' -z 54606 ']' 00:04:48.697 00:17:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.697 00:17:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.697 00:17:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.697 00:17:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.697 00:17:04 -- common/autotest_common.sh@10 -- # set +x 00:04:48.697 [2024-09-29 00:17:04.437799] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:48.697 [2024-09-29 00:17:04.437906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54606 ] 00:04:48.956 [2024-09-29 00:17:04.570052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.956 [2024-09-29 00:17:04.619869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.956 [2024-09-29 00:17:04.620065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.895 00:17:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.895 00:17:05 -- common/autotest_common.sh@852 -- # return 0 00:04:49.895 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:49.895 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:49.895 00:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:49.895 00:17:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.895 { 00:04:49.895 "filename": "/tmp/spdk_mem_dump.txt" 00:04:49.895 } 00:04:49.895 00:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.895 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:49.895 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:49.895 1 heaps totaling size 814.000000 MiB 00:04:49.895 size: 814.000000 MiB heap id: 0 00:04:49.896 end heaps---------- 00:04:49.896 8 mempools totaling size 598.116089 MiB 00:04:49.896 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:49.896 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:49.896 size: 84.521057 MiB name: bdev_io_54606 00:04:49.896 size: 51.011292 MiB name: evtpool_54606 00:04:49.896 size: 50.003479 MiB name: msgpool_54606 00:04:49.896 size: 21.763794 MiB name: PDU_Pool 00:04:49.896 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:49.896 size: 0.026123 MiB name: Session_Pool 00:04:49.896 end mempools------- 00:04:49.896 6 memzones totaling size 4.142822 MiB 00:04:49.896 size: 1.000366 MiB name: RG_ring_0_54606 00:04:49.896 size: 1.000366 MiB name: RG_ring_1_54606 00:04:49.896 size: 1.000366 MiB name: RG_ring_4_54606 00:04:49.896 size: 1.000366 MiB name: RG_ring_5_54606 00:04:49.896 size: 0.125366 MiB name: RG_ring_2_54606 00:04:49.896 size: 0.015991 MiB name: RG_ring_3_54606 00:04:49.896 end memzones------- 00:04:49.896 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:49.896 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:04:49.896 list of free elements. 
size: 12.471008 MiB 00:04:49.896 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:49.896 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:49.896 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:49.896 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:49.896 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:49.896 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:49.896 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:49.896 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:49.896 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:49.896 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:04:49.896 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:49.896 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:49.896 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:49.896 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:49.896 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:49.896 list of standard malloc elements. size: 199.266418 MiB 00:04:49.896 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:49.896 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:49.896 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:49.896 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:49.896 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:49.896 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:49.896 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:49.896 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:49.896 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:49.896 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:49.896 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:49.896 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:49.896 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93580 with size: 0.000183 MiB 
00:04:49.897 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:49.897 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:49.897 element at 
address: 0x200027e6c780 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:49.897 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6ec40 
with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:49.898 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:49.898 list of memzone associated elements. 
size: 602.262573 MiB 00:04:49.898 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:49.898 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:49.898 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:49.898 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:49.898 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:49.898 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54606_0 00:04:49.898 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:49.898 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54606_0 00:04:49.898 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:49.898 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54606_0 00:04:49.898 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:49.898 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:49.898 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:49.898 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:49.898 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:49.898 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54606 00:04:49.898 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:49.898 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54606 00:04:49.898 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:49.898 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54606 00:04:49.898 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:49.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:49.898 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:49.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:49.898 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:49.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:49.898 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:49.898 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:49.898 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:49.898 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54606 00:04:49.898 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:49.898 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54606 00:04:49.898 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:49.898 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54606 00:04:49.898 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:49.898 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54606 00:04:49.898 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:49.898 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54606 00:04:49.898 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:49.898 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:49.898 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:49.898 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:49.898 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:49.898 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:49.898 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:49.898 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_54606 00:04:49.898 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:49.898 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:49.898 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:49.898 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:49.898 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:49.898 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54606 00:04:49.898 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:49.898 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:49.898 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:49.898 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54606 00:04:49.898 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:49.898 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54606 00:04:49.898 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:49.898 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:49.898 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:49.898 00:17:05 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54606 00:04:49.898 00:17:05 -- common/autotest_common.sh@926 -- # '[' -z 54606 ']' 00:04:49.898 00:17:05 -- common/autotest_common.sh@930 -- # kill -0 54606 00:04:49.898 00:17:05 -- common/autotest_common.sh@931 -- # uname 00:04:49.898 00:17:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.898 00:17:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54606 00:04:49.898 00:17:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.898 killing process with pid 54606 00:04:49.898 00:17:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.898 00:17:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54606' 00:04:49.898 00:17:05 -- common/autotest_common.sh@945 -- # kill 54606 00:04:49.898 00:17:05 -- common/autotest_common.sh@950 -- # wait 54606 00:04:50.159 00:04:50.159 real 0m1.550s 00:04:50.159 user 0m1.825s 00:04:50.159 sys 0m0.305s 00:04:50.159 00:17:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.159 ************************************ 00:04:50.159 END TEST dpdk_mem_utility 00:04:50.159 ************************************ 00:04:50.159 00:17:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.159 00:17:05 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.159 00:17:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.159 00:17:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.159 00:17:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.159 ************************************ 00:04:50.159 START TEST event 00:04:50.159 ************************************ 00:04:50.159 00:17:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:50.159 * Looking for test storage... 
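The heap, mempool, and memzone dump above is what scripts/dpdk_mem_info.py renders from the snapshot the target writes when asked via the env_dpdk_get_mem_stats RPC (the {"filename": "/tmp/spdk_mem_dump.txt"} reply earlier in the test). A minimal sketch of that flow with the paths from this log; rpc_cmd in the test script is assumed here to be a thin wrapper around scripts/rpc.py:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # summary view: heaps, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # the detailed element/memzone listing shown above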
00:04:50.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:50.159 00:17:05 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:50.159 00:17:05 -- bdev/nbd_common.sh@6 -- # set -e 00:04:50.159 00:17:05 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.159 00:17:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:50.159 00:17:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.159 00:17:05 -- common/autotest_common.sh@10 -- # set +x 00:04:50.159 ************************************ 00:04:50.159 START TEST event_perf 00:04:50.159 ************************************ 00:04:50.159 00:17:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:50.418 Running I/O for 1 seconds...[2024-09-29 00:17:06.018237] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:50.418 [2024-09-29 00:17:06.018304] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54682 ] 00:04:50.418 [2024-09-29 00:17:06.150932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:50.418 [2024-09-29 00:17:06.201852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.418 [2024-09-29 00:17:06.201970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.418 [2024-09-29 00:17:06.202013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.418 [2024-09-29 00:17:06.202018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.795 Running I/O for 1 seconds... 00:04:51.795 lcore 0: 205804 00:04:51.795 lcore 1: 205801 00:04:51.795 lcore 2: 205803 00:04:51.795 lcore 3: 205803 00:04:51.795 done. 00:04:51.795 00:04:51.795 real 0m1.283s 00:04:51.795 user 0m4.124s 00:04:51.795 sys 0m0.039s 00:04:51.795 00:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.795 00:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.795 ************************************ 00:04:51.795 END TEST event_perf 00:04:51.795 ************************************ 00:04:51.795 00:17:07 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.795 00:17:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:51.795 00:17:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.795 00:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:51.795 ************************************ 00:04:51.795 START TEST event_reactor 00:04:51.795 ************************************ 00:04:51.795 00:17:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:51.795 [2024-09-29 00:17:07.351673] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:51.795 [2024-09-29 00:17:07.351801] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54715 ] 00:04:51.795 [2024-09-29 00:17:07.484329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.795 [2024-09-29 00:17:07.530400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.777 test_start 00:04:52.777 oneshot 00:04:52.777 tick 100 00:04:52.777 tick 100 00:04:52.777 tick 250 00:04:52.777 tick 100 00:04:52.777 tick 100 00:04:52.777 tick 100 00:04:52.777 tick 250 00:04:52.777 tick 500 00:04:52.777 tick 100 00:04:52.777 tick 100 00:04:52.777 tick 250 00:04:52.777 tick 100 00:04:52.777 tick 100 00:04:52.777 test_end 00:04:52.777 00:04:52.777 real 0m1.272s 00:04:52.777 user 0m1.133s 00:04:52.777 sys 0m0.034s 00:04:52.777 00:17:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.777 ************************************ 00:04:52.777 END TEST event_reactor 00:04:52.777 ************************************ 00:04:52.777 00:17:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.036 00:17:08 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.036 00:17:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:53.036 00:17:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.036 00:17:08 -- common/autotest_common.sh@10 -- # set +x 00:04:53.036 ************************************ 00:04:53.036 START TEST event_reactor_perf 00:04:53.036 ************************************ 00:04:53.037 00:17:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:53.037 [2024-09-29 00:17:08.680662] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:53.037 [2024-09-29 00:17:08.680760] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54745 ] 00:04:53.037 [2024-09-29 00:17:08.815586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.037 [2024-09-29 00:17:08.862914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.414 test_start 00:04:54.414 test_end 00:04:54.414 Performance: 448381 events per second 00:04:54.414 00:04:54.414 real 0m1.282s 00:04:54.414 user 0m1.137s 00:04:54.414 sys 0m0.039s 00:04:54.414 00:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.414 00:17:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.415 ************************************ 00:04:54.415 END TEST event_reactor_perf 00:04:54.415 ************************************ 00:04:54.415 00:17:09 -- event/event.sh@49 -- # uname -s 00:04:54.415 00:17:09 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:54.415 00:17:09 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.415 00:17:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.415 00:17:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.415 00:17:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.415 ************************************ 00:04:54.415 START TEST event_scheduler 00:04:54.415 ************************************ 00:04:54.415 00:17:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:54.415 * Looking for test storage... 00:04:54.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:54.415 00:17:10 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:54.415 00:17:10 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54811 00:04:54.415 00:17:10 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.415 00:17:10 -- scheduler/scheduler.sh@37 -- # waitforlisten 54811 00:04:54.415 00:17:10 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:54.415 00:17:10 -- common/autotest_common.sh@819 -- # '[' -z 54811 ']' 00:04:54.415 00:17:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.415 00:17:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.415 00:17:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.415 00:17:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.415 00:17:10 -- common/autotest_common.sh@10 -- # set +x 00:04:54.415 [2024-09-29 00:17:10.125224] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:04:54.415 [2024-09-29 00:17:10.126046] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54811 ] 00:04:54.673 [2024-09-29 00:17:10.267260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.673 [2024-09-29 00:17:10.338708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.673 [2024-09-29 00:17:10.338838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.673 [2024-09-29 00:17:10.338871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.674 [2024-09-29 00:17:10.338876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.611 00:17:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.611 00:17:11 -- common/autotest_common.sh@852 -- # return 0 00:04:55.611 00:17:11 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:55.611 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.611 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.611 POWER: Env isn't set yet! 00:04:55.611 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:55.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.611 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.611 POWER: Attempting to initialise PSTAT power management... 00:04:55.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.611 POWER: Cannot set governor of lcore 0 to performance 00:04:55.611 POWER: Attempting to initialise AMD PSTATE power management... 00:04:55.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.611 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.611 POWER: Attempting to initialise CPPC power management... 00:04:55.611 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:55.611 POWER: Cannot set governor of lcore 0 to userspace 00:04:55.611 POWER: Attempting to initialise VM power management... 
00:04:55.611 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:55.611 POWER: Unable to set Power Management Environment for lcore 0 00:04:55.611 [2024-09-29 00:17:11.124797] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:55.611 [2024-09-29 00:17:11.124810] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:55.611 [2024-09-29 00:17:11.124819] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:55.611 [2024-09-29 00:17:11.124831] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:55.611 [2024-09-29 00:17:11.124838] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:55.611 [2024-09-29 00:17:11.124844] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:55.611 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.611 00:17:11 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:55.611 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.611 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.611 [2024-09-29 00:17:11.174841] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:55.611 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.611 00:17:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:55.611 00:17:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.611 00:17:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.611 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.611 ************************************ 00:04:55.611 START TEST scheduler_create_thread 00:04:55.612 ************************************ 00:04:55.612 00:17:11 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 2 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 3 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 4 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 5 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 6 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 7 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 8 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 9 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 10 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:55.612 00:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.612 00:17:11 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:55.612 00:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.612 00:17:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.020 00:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:57.020 00:17:12 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:57.020 00:17:12 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:57.020 00:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:57.020 00:17:12 -- common/autotest_common.sh@10 -- # set +x 00:04:57.956 ************************************ 00:04:57.956 END TEST scheduler_create_thread 00:04:57.956 ************************************ 00:04:57.956 00:17:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:57.956 00:04:57.956 real 0m2.612s 00:04:57.956 user 0m0.016s 00:04:57.956 sys 0m0.004s 00:04:57.956 00:17:13 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.956 00:17:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.215 00:17:13 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:58.215 00:17:13 -- scheduler/scheduler.sh@46 -- # killprocess 54811 00:04:58.215 00:17:13 -- common/autotest_common.sh@926 -- # '[' -z 54811 ']' 00:04:58.215 00:17:13 -- common/autotest_common.sh@930 -- # kill -0 54811 00:04:58.215 00:17:13 -- common/autotest_common.sh@931 -- # uname 00:04:58.215 00:17:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.215 00:17:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54811 00:04:58.215 killing process with pid 54811 00:04:58.215 00:17:13 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:58.215 00:17:13 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:58.215 00:17:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54811' 00:04:58.215 00:17:13 -- common/autotest_common.sh@945 -- # kill 54811 00:04:58.215 00:17:13 -- common/autotest_common.sh@950 -- # wait 54811 00:04:58.474 [2024-09-29 00:17:14.278549] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:58.733 ************************************ 00:04:58.733 END TEST event_scheduler 00:04:58.733 ************************************ 00:04:58.733 00:04:58.733 real 0m4.466s 00:04:58.733 user 0m8.657s 00:04:58.733 sys 0m0.309s 00:04:58.733 00:17:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.733 00:17:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.733 00:17:14 -- event/event.sh@51 -- # modprobe -n nbd 00:04:58.733 00:17:14 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:58.733 00:17:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.733 00:17:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.733 00:17:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.733 ************************************ 00:04:58.733 START TEST app_repeat 00:04:58.733 ************************************ 00:04:58.733 00:17:14 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:58.733 00:17:14 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.733 00:17:14 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.733 00:17:14 -- event/event.sh@13 -- # local nbd_list 00:04:58.733 00:17:14 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.733 00:17:14 -- event/event.sh@14 -- # local bdev_list 00:04:58.733 00:17:14 -- event/event.sh@15 -- # local repeat_times=4 00:04:58.733 00:17:14 -- event/event.sh@17 -- # modprobe nbd 00:04:58.733 Process app_repeat pid: 54905 00:04:58.733 spdk_app_start Round 0 00:04:58.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
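[Editor's note] The killprocess trace that shut down the scheduler app a few lines up follows a fixed pattern: confirm the pid is alive, look up its command name, refuse to proceed if it is a sudo wrapper, then SIGTERM and wait. A rough reconstruction of that behavior as it appears in this log (the real autotest_common.sh helper handles more cases, so treat this as a sketch, not its exact definition):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # is the process still running?
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_2 in the trace above
      [ "$process_name" = sudo ] && return 1          # sketch: bail out instead of killing a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap it so the exit status is observed
  }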
00:04:58.733 00:17:14 -- event/event.sh@19 -- # repeat_pid=54905 00:04:58.733 00:17:14 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.733 00:17:14 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54905' 00:04:58.733 00:17:14 -- event/event.sh@23 -- # for i in {0..2} 00:04:58.733 00:17:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:58.733 00:17:14 -- event/event.sh@25 -- # waitforlisten 54905 /var/tmp/spdk-nbd.sock 00:04:58.733 00:17:14 -- common/autotest_common.sh@819 -- # '[' -z 54905 ']' 00:04:58.733 00:17:14 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:58.733 00:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.733 00:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.733 00:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.734 00:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.734 00:17:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.734 [2024-09-29 00:17:14.543895] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:04:58.734 [2024-09-29 00:17:14.543984] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54905 ] 00:04:58.992 [2024-09-29 00:17:14.680067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.992 [2024-09-29 00:17:14.729009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.992 [2024-09-29 00:17:14.729016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.992 00:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.992 00:17:14 -- common/autotest_common.sh@852 -- # return 0 00:04:58.992 00:17:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.251 Malloc0 00:04:59.510 00:17:15 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.510 Malloc1 00:04:59.510 00:17:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@12 -- # local i 00:04:59.510 00:17:15 -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.510 00:17:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.769 /dev/nbd0 00:04:59.769 00:17:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.769 00:17:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.769 00:17:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:59.769 00:17:15 -- common/autotest_common.sh@857 -- # local i 00:04:59.769 00:17:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:59.769 00:17:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:59.769 00:17:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:59.769 00:17:15 -- common/autotest_common.sh@861 -- # break 00:04:59.769 00:17:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:59.769 00:17:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:59.769 00:17:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.769 1+0 records in 00:04:59.769 1+0 records out 00:04:59.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361871 s, 11.3 MB/s 00:04:59.769 00:17:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.769 00:17:15 -- common/autotest_common.sh@874 -- # size=4096 00:04:59.769 00:17:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.769 00:17:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:59.769 00:17:15 -- common/autotest_common.sh@877 -- # return 0 00:04:59.769 00:17:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.769 00:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.769 00:17:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.029 /dev/nbd1 00:05:00.287 00:17:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.287 00:17:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.287 00:17:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:00.287 00:17:15 -- common/autotest_common.sh@857 -- # local i 00:05:00.287 00:17:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:00.287 00:17:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:00.287 00:17:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:00.287 00:17:15 -- common/autotest_common.sh@861 -- # break 00:05:00.287 00:17:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:00.287 00:17:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:00.287 00:17:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.287 1+0 records in 00:05:00.287 1+0 records out 00:05:00.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593573 s, 6.9 MB/s 00:05:00.287 00:17:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.287 00:17:15 -- common/autotest_common.sh@874 -- # size=4096 00:05:00.287 00:17:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.287 00:17:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:00.287 00:17:15 -- common/autotest_common.sh@877 -- # return 0 00:05:00.287 
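[Editor's note] Each waitfornbd block above boils down to: export a malloc bdev as a kernel nbd device over the app's RPC socket, poll /proc/partitions until the device shows up, then prove it answers I/O with a single direct 4 KiB read. A condensed sketch of that sequence using the socket, bdev name, and scratch path shown in this log (the retry interval is illustrative; the traced helper only shows the bound of 20 attempts):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  testdir=/home/vagrant/spdk_repo/spdk/test/event

  # Expose the malloc bdev through the kernel nbd driver.
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0

  # Wait (bounded) for the kernel to register the device.
  for (( i = 1; i <= 20; i++ )); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
  done

  # Read one block with O_DIRECT to confirm the device is usable.
  dd if=/dev/nbd0 of=$testdir/nbdtest bs=4096 count=1 iflag=direct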
00:17:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.288 00:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.288 00:17:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.288 00:17:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.288 00:17:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.547 { 00:05:00.547 "nbd_device": "/dev/nbd0", 00:05:00.547 "bdev_name": "Malloc0" 00:05:00.547 }, 00:05:00.547 { 00:05:00.547 "nbd_device": "/dev/nbd1", 00:05:00.547 "bdev_name": "Malloc1" 00:05:00.547 } 00:05:00.547 ]' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.547 { 00:05:00.547 "nbd_device": "/dev/nbd0", 00:05:00.547 "bdev_name": "Malloc0" 00:05:00.547 }, 00:05:00.547 { 00:05:00.547 "nbd_device": "/dev/nbd1", 00:05:00.547 "bdev_name": "Malloc1" 00:05:00.547 } 00:05:00.547 ]' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.547 /dev/nbd1' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.547 /dev/nbd1' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.547 256+0 records in 00:05:00.547 256+0 records out 00:05:00.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079116 s, 133 MB/s 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.547 256+0 records in 00:05:00.547 256+0 records out 00:05:00.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249302 s, 42.1 MB/s 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.547 256+0 records in 00:05:00.547 256+0 records out 00:05:00.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315143 s, 33.3 MB/s 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@51 -- # local i 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.547 00:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@41 -- # break 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.806 00:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@41 -- # break 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.065 00:17:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.324 00:17:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.324 00:17:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.324 00:17:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@65 -- # true 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.583 
00:17:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.583 00:17:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.583 00:17:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.843 00:17:17 -- event/event.sh@35 -- # sleep 3 00:05:01.843 [2024-09-29 00:17:17.611818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.843 [2024-09-29 00:17:17.657940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.843 [2024-09-29 00:17:17.657950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.843 [2024-09-29 00:17:17.685207] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.843 [2024-09-29 00:17:17.685279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.130 spdk_app_start Round 1 00:05:05.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.130 00:17:20 -- event/event.sh@23 -- # for i in {0..2} 00:05:05.130 00:17:20 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.130 00:17:20 -- event/event.sh@25 -- # waitforlisten 54905 /var/tmp/spdk-nbd.sock 00:05:05.130 00:17:20 -- common/autotest_common.sh@819 -- # '[' -z 54905 ']' 00:05:05.130 00:17:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.130 00:17:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.130 00:17:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
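[Editor's note] Round 0 above ended with the write/verify half of nbd_rpc_data_verify: fill a scratch file with random data, copy it onto each nbd device, compare it back, then detach the disks and tear the instance down with spdk_kill_instance. A sketch of that round trip with the paths and sizes taken from the log (the traced script writes to both devices first and verifies afterwards; the loop here condenses the order):

  testdir=/home/vagrant/spdk_repo/spdk/test/event
  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256   # 1 MiB of random data

  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M $testdir/nbdrandtest $nbd                       # byte-compare what was written
  done
  rm $testdir/nbdrandtest

  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  $rpc -s $sock spdk_kill_instance SIGTERM                       # ends the current app_repeat round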
00:05:05.130 00:17:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.130 00:17:20 -- common/autotest_common.sh@10 -- # set +x 00:05:05.130 00:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:05.130 00:17:20 -- common/autotest_common.sh@852 -- # return 0 00:05:05.130 00:17:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.389 Malloc0 00:05:05.389 00:17:21 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.649 Malloc1 00:05:05.649 00:17:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@12 -- # local i 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.649 00:17:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.649 /dev/nbd0 00:05:05.908 00:17:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.908 00:17:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.908 00:17:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:05.908 00:17:21 -- common/autotest_common.sh@857 -- # local i 00:05:05.908 00:17:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:05.908 00:17:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:05.908 00:17:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:05.908 00:17:21 -- common/autotest_common.sh@861 -- # break 00:05:05.908 00:17:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:05.908 00:17:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:05.908 00:17:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.908 1+0 records in 00:05:05.908 1+0 records out 00:05:05.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211658 s, 19.4 MB/s 00:05:05.908 00:17:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.908 00:17:21 -- common/autotest_common.sh@874 -- # size=4096 00:05:05.908 00:17:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.908 00:17:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:05.908 00:17:21 -- common/autotest_common.sh@877 -- # return 0 00:05:05.908 00:17:21 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.908 00:17:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.908 00:17:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.167 /dev/nbd1 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.167 00:17:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:06.167 00:17:21 -- common/autotest_common.sh@857 -- # local i 00:05:06.167 00:17:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:06.167 00:17:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:06.167 00:17:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:06.167 00:17:21 -- common/autotest_common.sh@861 -- # break 00:05:06.167 00:17:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:06.167 00:17:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:06.167 00:17:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.167 1+0 records in 00:05:06.167 1+0 records out 00:05:06.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269242 s, 15.2 MB/s 00:05:06.167 00:17:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.167 00:17:21 -- common/autotest_common.sh@874 -- # size=4096 00:05:06.167 00:17:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.167 00:17:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:06.167 00:17:21 -- common/autotest_common.sh@877 -- # return 0 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.167 00:17:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.428 { 00:05:06.428 "nbd_device": "/dev/nbd0", 00:05:06.428 "bdev_name": "Malloc0" 00:05:06.428 }, 00:05:06.428 { 00:05:06.428 "nbd_device": "/dev/nbd1", 00:05:06.428 "bdev_name": "Malloc1" 00:05:06.428 } 00:05:06.428 ]' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.428 { 00:05:06.428 "nbd_device": "/dev/nbd0", 00:05:06.428 "bdev_name": "Malloc0" 00:05:06.428 }, 00:05:06.428 { 00:05:06.428 "nbd_device": "/dev/nbd1", 00:05:06.428 "bdev_name": "Malloc1" 00:05:06.428 } 00:05:06.428 ]' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.428 /dev/nbd1' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.428 /dev/nbd1' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.428 256+0 records in 00:05:06.428 256+0 records out 00:05:06.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722456 s, 145 MB/s 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.428 256+0 records in 00:05:06.428 256+0 records out 00:05:06.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225496 s, 46.5 MB/s 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.428 256+0 records in 00:05:06.428 256+0 records out 00:05:06.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250721 s, 41.8 MB/s 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@51 -- # local i 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.428 00:17:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@41 -- # break 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.687 00:17:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@41 -- # break 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.255 00:17:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@65 -- # true 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.514 00:17:23 -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.514 00:17:23 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.773 00:17:23 -- event/event.sh@35 -- # sleep 3 00:05:07.773 [2024-09-29 00:17:23.604847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.032 [2024-09-29 00:17:23.652749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.032 [2024-09-29 00:17:23.652758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.032 [2024-09-29 00:17:23.680917] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.032 [2024-09-29 00:17:23.680969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.318 spdk_app_start Round 2 00:05:11.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
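[Editor's note] Before and after each data pass the test cross-checks how many nbd devices the target still reports: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, and grep -c counts them (an empty array yields 0 once the disks are stopped, as in the traces above). A small sketch of that check; the expected value is a hypothetical illustration, not something asserted by the log:

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  disks_json=$($rpc -s $sock nbd_get_disks)
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)

  expected=2   # hypothetical: 2 while Malloc0/Malloc1 are attached, 0 after nbd_stop_disk
  [ "$count" -eq "$expected" ] || echo "unexpected nbd count: $count"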
00:05:11.318 00:17:26 -- event/event.sh@23 -- # for i in {0..2} 00:05:11.318 00:17:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.318 00:17:26 -- event/event.sh@25 -- # waitforlisten 54905 /var/tmp/spdk-nbd.sock 00:05:11.318 00:17:26 -- common/autotest_common.sh@819 -- # '[' -z 54905 ']' 00:05:11.318 00:17:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.318 00:17:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.318 00:17:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.318 00:17:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.318 00:17:26 -- common/autotest_common.sh@10 -- # set +x 00:05:11.318 00:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.318 00:17:26 -- common/autotest_common.sh@852 -- # return 0 00:05:11.318 00:17:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.318 Malloc0 00:05:11.318 00:17:27 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.577 Malloc1 00:05:11.577 00:17:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.577 00:17:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.577 00:17:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.577 00:17:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.577 00:17:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.577 00:17:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@12 -- # local i 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.578 00:17:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.837 /dev/nbd0 00:05:11.837 00:17:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.837 00:17:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.837 00:17:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:11.837 00:17:27 -- common/autotest_common.sh@857 -- # local i 00:05:11.837 00:17:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:11.837 00:17:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:11.837 00:17:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:11.837 00:17:27 -- common/autotest_common.sh@861 -- # break 00:05:11.837 00:17:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:11.837 00:17:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:11.837 00:17:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:11.837 1+0 records in 00:05:11.837 1+0 records out 00:05:11.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259992 s, 15.8 MB/s 00:05:11.837 00:17:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.837 00:17:27 -- common/autotest_common.sh@874 -- # size=4096 00:05:11.837 00:17:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.837 00:17:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:11.837 00:17:27 -- common/autotest_common.sh@877 -- # return 0 00:05:11.837 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.837 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.837 00:17:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.096 /dev/nbd1 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.096 00:17:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:12.096 00:17:27 -- common/autotest_common.sh@857 -- # local i 00:05:12.096 00:17:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:12.096 00:17:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:12.096 00:17:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:12.096 00:17:27 -- common/autotest_common.sh@861 -- # break 00:05:12.096 00:17:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:12.096 00:17:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:12.096 00:17:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.096 1+0 records in 00:05:12.096 1+0 records out 00:05:12.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243561 s, 16.8 MB/s 00:05:12.096 00:17:27 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.096 00:17:27 -- common/autotest_common.sh@874 -- # size=4096 00:05:12.096 00:17:27 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.096 00:17:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:12.096 00:17:27 -- common/autotest_common.sh@877 -- # return 0 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.096 00:17:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.356 { 00:05:12.356 "nbd_device": "/dev/nbd0", 00:05:12.356 "bdev_name": "Malloc0" 00:05:12.356 }, 00:05:12.356 { 00:05:12.356 "nbd_device": "/dev/nbd1", 00:05:12.356 "bdev_name": "Malloc1" 00:05:12.356 } 00:05:12.356 ]' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.356 { 00:05:12.356 "nbd_device": "/dev/nbd0", 00:05:12.356 "bdev_name": "Malloc0" 00:05:12.356 }, 00:05:12.356 { 00:05:12.356 "nbd_device": "/dev/nbd1", 00:05:12.356 "bdev_name": "Malloc1" 00:05:12.356 } 00:05:12.356 ]' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:05:12.356 /dev/nbd1' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.356 /dev/nbd1' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.356 256+0 records in 00:05:12.356 256+0 records out 00:05:12.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105536 s, 99.4 MB/s 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.356 256+0 records in 00:05:12.356 256+0 records out 00:05:12.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188679 s, 55.6 MB/s 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.356 00:17:28 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.615 256+0 records in 00:05:12.615 256+0 records out 00:05:12.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311042 s, 33.7 MB/s 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@51 -- # local i 00:05:12.615 
00:17:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.615 00:17:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@41 -- # break 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.874 00:17:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@41 -- # break 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.133 00:17:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@65 -- # true 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.400 00:17:29 -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.400 00:17:29 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.668 00:17:29 -- event/event.sh@35 -- # sleep 3 00:05:13.928 [2024-09-29 00:17:29.548664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.928 [2024-09-29 00:17:29.594610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.928 [2024-09-29 00:17:29.594620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.928 [2024-09-29 00:17:29.621761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.928 [2024-09-29 00:17:29.621833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
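[Editor's note] Rounds 0 through 2 above all follow the same outer loop from event.sh: announce the round, wait for the repeat app to listen on the nbd socket, exercise the malloc/nbd path, then ask the target to kill the current instance and sleep before the next iteration. Reassembled from the event.sh@NN traces in this log (waitforlisten and the per-round body are the helpers whose output appears above, not redefined here):

  for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock    # blocks until the app answers RPCs

    # ... malloc bdev creation, nbd attach, write/verify (see the sketches above) ...

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM                      # app_repeat then starts the next round itself
    sleep 3
  done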
00:05:17.219 00:17:32 -- event/event.sh@38 -- # waitforlisten 54905 /var/tmp/spdk-nbd.sock 00:05:17.219 00:17:32 -- common/autotest_common.sh@819 -- # '[' -z 54905 ']' 00:05:17.219 00:17:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.219 00:17:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.219 00:17:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.219 00:17:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.219 00:17:32 -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 00:17:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.219 00:17:32 -- common/autotest_common.sh@852 -- # return 0 00:05:17.219 00:17:32 -- event/event.sh@39 -- # killprocess 54905 00:05:17.219 00:17:32 -- common/autotest_common.sh@926 -- # '[' -z 54905 ']' 00:05:17.219 00:17:32 -- common/autotest_common.sh@930 -- # kill -0 54905 00:05:17.219 00:17:32 -- common/autotest_common.sh@931 -- # uname 00:05:17.219 00:17:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:17.219 00:17:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54905 00:05:17.219 00:17:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:17.219 00:17:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:17.219 killing process with pid 54905 00:05:17.219 00:17:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54905' 00:05:17.219 00:17:32 -- common/autotest_common.sh@945 -- # kill 54905 00:05:17.219 00:17:32 -- common/autotest_common.sh@950 -- # wait 54905 00:05:17.219 spdk_app_start is called in Round 0. 00:05:17.219 Shutdown signal received, stop current app iteration 00:05:17.219 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.219 spdk_app_start is called in Round 1. 00:05:17.219 Shutdown signal received, stop current app iteration 00:05:17.219 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.219 spdk_app_start is called in Round 2. 00:05:17.219 Shutdown signal received, stop current app iteration 00:05:17.219 Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 reinitialization... 00:05:17.219 spdk_app_start is called in Round 3. 
00:05:17.219 Shutdown signal received, stop current app iteration 00:05:17.219 00:17:32 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:17.219 00:17:32 -- event/event.sh@42 -- # return 0 00:05:17.219 00:05:17.219 real 0m18.372s 00:05:17.219 user 0m41.905s 00:05:17.219 sys 0m2.465s 00:05:17.219 00:17:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.219 00:17:32 -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 ************************************ 00:05:17.219 END TEST app_repeat 00:05:17.219 ************************************ 00:05:17.219 00:17:32 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:17.219 00:17:32 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.219 00:17:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.219 00:17:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.219 00:17:32 -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 ************************************ 00:05:17.219 START TEST cpu_locks 00:05:17.219 ************************************ 00:05:17.219 00:17:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:17.219 * Looking for test storage... 00:05:17.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:17.219 00:17:33 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:17.219 00:17:33 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:17.219 00:17:33 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:17.219 00:17:33 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:17.219 00:17:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.219 00:17:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.219 00:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.219 ************************************ 00:05:17.219 START TEST default_locks 00:05:17.219 ************************************ 00:05:17.219 00:17:33 -- common/autotest_common.sh@1104 -- # default_locks 00:05:17.219 00:17:33 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55330 00:05:17.219 00:17:33 -- event/cpu_locks.sh@47 -- # waitforlisten 55330 00:05:17.219 00:17:33 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.219 00:17:33 -- common/autotest_common.sh@819 -- # '[' -z 55330 ']' 00:05:17.219 00:17:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.219 00:17:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.219 00:17:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.219 00:17:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.219 00:17:33 -- common/autotest_common.sh@10 -- # set +x 00:05:17.478 [2024-09-29 00:17:33.101317] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:17.478 [2024-09-29 00:17:33.101455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55330 ] 00:05:17.479 [2024-09-29 00:17:33.237180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.479 [2024-09-29 00:17:33.288641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.479 [2024-09-29 00:17:33.288838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.416 00:17:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.416 00:17:34 -- common/autotest_common.sh@852 -- # return 0 00:05:18.416 00:17:34 -- event/cpu_locks.sh@49 -- # locks_exist 55330 00:05:18.416 00:17:34 -- event/cpu_locks.sh@22 -- # lslocks -p 55330 00:05:18.416 00:17:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.675 00:17:34 -- event/cpu_locks.sh@50 -- # killprocess 55330 00:05:18.675 00:17:34 -- common/autotest_common.sh@926 -- # '[' -z 55330 ']' 00:05:18.675 00:17:34 -- common/autotest_common.sh@930 -- # kill -0 55330 00:05:18.675 00:17:34 -- common/autotest_common.sh@931 -- # uname 00:05:18.675 00:17:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.675 00:17:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55330 00:05:18.675 00:17:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.675 00:17:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.675 killing process with pid 55330 00:05:18.675 00:17:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55330' 00:05:18.675 00:17:34 -- common/autotest_common.sh@945 -- # kill 55330 00:05:18.675 00:17:34 -- common/autotest_common.sh@950 -- # wait 55330 00:05:18.935 00:17:34 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55330 00:05:18.935 00:17:34 -- common/autotest_common.sh@640 -- # local es=0 00:05:18.935 00:17:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55330 00:05:18.935 00:17:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:18.935 00:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.935 00:17:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:18.935 00:17:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:18.935 00:17:34 -- common/autotest_common.sh@643 -- # waitforlisten 55330 00:05:18.935 00:17:34 -- common/autotest_common.sh@819 -- # '[' -z 55330 ']' 00:05:18.935 00:17:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.935 00:17:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.935 00:17:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:18.935 00:17:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.935 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.935 ERROR: process (pid: 55330) is no longer running 00:05:18.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55330) - No such process 00:05:18.935 00:17:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:18.935 00:17:34 -- common/autotest_common.sh@852 -- # return 1 00:05:18.935 00:17:34 -- common/autotest_common.sh@643 -- # es=1 00:05:18.935 00:17:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:18.935 00:17:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:18.935 00:17:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:18.935 00:17:34 -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.935 00:17:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.935 00:17:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.935 00:17:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.935 00:05:18.935 real 0m1.655s 00:05:18.935 user 0m1.934s 00:05:18.935 sys 0m0.382s 00:05:18.935 00:17:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.935 ************************************ 00:05:18.935 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.935 END TEST default_locks 00:05:18.935 ************************************ 00:05:18.935 00:17:34 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.935 00:17:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.935 00:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.935 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:05:18.935 ************************************ 00:05:18.935 START TEST default_locks_via_rpc 00:05:18.935 ************************************ 00:05:18.935 00:17:34 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:18.935 00:17:34 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55377 00:05:18.935 00:17:34 -- event/cpu_locks.sh@63 -- # waitforlisten 55377 00:05:18.935 00:17:34 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.935 00:17:34 -- common/autotest_common.sh@819 -- # '[' -z 55377 ']' 00:05:18.935 00:17:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.935 00:17:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.935 00:17:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.935 00:17:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.935 00:17:34 -- common/autotest_common.sh@10 -- # set +x 00:05:19.194 [2024-09-29 00:17:34.805015] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
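A minimal sketch of the lock check the default_locks run above performs, reconstructed from the traced commands (the lock name, pid, and lslocks usage are from the log; the helper name follows cpu_locks.sh and the exact pipeline is an assumption):

locks_exist() {
    local pid=$1
    # spdk_tgt started with -m 0x1 holds an advisory lock for the core it claimed;
    # lslocks lists the locks owned by that pid and grep confirms the spdk_cpu_lock entry
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
locks_exist 55330   # succeeds while the target is alive; fails once killprocess has reaped it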
00:05:19.194 [2024-09-29 00:17:34.805614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55377 ] 00:05:19.194 [2024-09-29 00:17:34.941538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.195 [2024-09-29 00:17:34.990410] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.195 [2024-09-29 00:17:34.990562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.131 00:17:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.131 00:17:35 -- common/autotest_common.sh@852 -- # return 0 00:05:20.131 00:17:35 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:20.131 00:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.131 00:17:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.131 00:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.131 00:17:35 -- event/cpu_locks.sh@67 -- # no_locks 00:05:20.131 00:17:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:20.131 00:17:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:20.131 00:17:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:20.131 00:17:35 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:20.131 00:17:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.131 00:17:35 -- common/autotest_common.sh@10 -- # set +x 00:05:20.131 00:17:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.131 00:17:35 -- event/cpu_locks.sh@71 -- # locks_exist 55377 00:05:20.131 00:17:35 -- event/cpu_locks.sh@22 -- # lslocks -p 55377 00:05:20.131 00:17:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.390 00:17:36 -- event/cpu_locks.sh@73 -- # killprocess 55377 00:05:20.390 00:17:36 -- common/autotest_common.sh@926 -- # '[' -z 55377 ']' 00:05:20.390 00:17:36 -- common/autotest_common.sh@930 -- # kill -0 55377 00:05:20.390 00:17:36 -- common/autotest_common.sh@931 -- # uname 00:05:20.649 00:17:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:20.650 00:17:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55377 00:05:20.650 00:17:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:20.650 00:17:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:20.650 killing process with pid 55377 00:05:20.650 00:17:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55377' 00:05:20.650 00:17:36 -- common/autotest_common.sh@945 -- # kill 55377 00:05:20.650 00:17:36 -- common/autotest_common.sh@950 -- # wait 55377 00:05:20.909 00:05:20.909 real 0m1.784s 00:05:20.909 user 0m2.055s 00:05:20.909 sys 0m0.457s 00:05:20.909 00:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.909 00:17:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.909 ************************************ 00:05:20.909 END TEST default_locks_via_rpc 00:05:20.909 ************************************ 00:05:20.909 00:17:36 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.909 00:17:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.909 00:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.909 00:17:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.909 
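The via-RPC variant above toggles the same core locks at runtime instead of at startup; a hedged sketch of that flow, with the RPC names, socket, and lock-file glob taken from the trace (the scripts/rpc.py invocation is an assumption):

scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # target releases its core lock files
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null | wc -l                        # expect 0 while the locks are off
scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # target re-claims core 0
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock                     # the advisory lock is visible again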
************************************ 00:05:20.909 START TEST non_locking_app_on_locked_coremask 00:05:20.909 ************************************ 00:05:20.909 00:17:36 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:20.909 00:17:36 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55428 00:05:20.909 00:17:36 -- event/cpu_locks.sh@81 -- # waitforlisten 55428 /var/tmp/spdk.sock 00:05:20.909 00:17:36 -- common/autotest_common.sh@819 -- # '[' -z 55428 ']' 00:05:20.909 00:17:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.909 00:17:36 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.909 00:17:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.909 00:17:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.909 00:17:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.909 00:17:36 -- common/autotest_common.sh@10 -- # set +x 00:05:20.909 [2024-09-29 00:17:36.637032] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:20.909 [2024-09-29 00:17:36.637130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55428 ] 00:05:21.168 [2024-09-29 00:17:36.773912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.168 [2024-09-29 00:17:36.822249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.168 [2024-09-29 00:17:36.822451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.105 00:17:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.105 00:17:37 -- common/autotest_common.sh@852 -- # return 0 00:05:22.105 00:17:37 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55444 00:05:22.105 00:17:37 -- event/cpu_locks.sh@85 -- # waitforlisten 55444 /var/tmp/spdk2.sock 00:05:22.105 00:17:37 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:22.105 00:17:37 -- common/autotest_common.sh@819 -- # '[' -z 55444 ']' 00:05:22.105 00:17:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.105 00:17:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.105 00:17:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.105 00:17:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.105 00:17:37 -- common/autotest_common.sh@10 -- # set +x 00:05:22.105 [2024-09-29 00:17:37.681841] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:22.105 [2024-09-29 00:17:37.681946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55444 ] 00:05:22.105 [2024-09-29 00:17:37.821418] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:22.105 [2024-09-29 00:17:37.821465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.105 [2024-09-29 00:17:37.918303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.105 [2024-09-29 00:17:37.918482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.043 00:17:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.043 00:17:38 -- common/autotest_common.sh@852 -- # return 0 00:05:23.043 00:17:38 -- event/cpu_locks.sh@87 -- # locks_exist 55428 00:05:23.043 00:17:38 -- event/cpu_locks.sh@22 -- # lslocks -p 55428 00:05:23.043 00:17:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.312 00:17:39 -- event/cpu_locks.sh@89 -- # killprocess 55428 00:05:23.312 00:17:39 -- common/autotest_common.sh@926 -- # '[' -z 55428 ']' 00:05:23.312 00:17:39 -- common/autotest_common.sh@930 -- # kill -0 55428 00:05:23.312 00:17:39 -- common/autotest_common.sh@931 -- # uname 00:05:23.312 00:17:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:23.312 00:17:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55428 00:05:23.572 00:17:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:23.572 killing process with pid 55428 00:05:23.572 00:17:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:23.572 00:17:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55428' 00:05:23.572 00:17:39 -- common/autotest_common.sh@945 -- # kill 55428 00:05:23.572 00:17:39 -- common/autotest_common.sh@950 -- # wait 55428 00:05:24.140 00:17:39 -- event/cpu_locks.sh@90 -- # killprocess 55444 00:05:24.140 00:17:39 -- common/autotest_common.sh@926 -- # '[' -z 55444 ']' 00:05:24.140 00:17:39 -- common/autotest_common.sh@930 -- # kill -0 55444 00:05:24.140 00:17:39 -- common/autotest_common.sh@931 -- # uname 00:05:24.140 00:17:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:24.140 00:17:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55444 00:05:24.140 00:17:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:24.140 killing process with pid 55444 00:05:24.140 00:17:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:24.140 00:17:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55444' 00:05:24.140 00:17:39 -- common/autotest_common.sh@945 -- # kill 55444 00:05:24.140 00:17:39 -- common/autotest_common.sh@950 -- # wait 55444 00:05:24.140 00:05:24.140 real 0m3.386s 00:05:24.140 user 0m4.082s 00:05:24.140 sys 0m0.721s 00:05:24.140 00:17:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.140 00:17:39 -- common/autotest_common.sh@10 -- # set +x 00:05:24.140 ************************************ 00:05:24.140 END TEST non_locking_app_on_locked_coremask 00:05:24.140 ************************************ 00:05:24.400 00:17:40 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.400 00:17:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.400 00:17:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.400 00:17:40 -- common/autotest_common.sh@10 -- # set +x 00:05:24.400 ************************************ 00:05:24.400 START TEST locking_app_on_unlocked_coremask 00:05:24.400 ************************************ 00:05:24.400 00:17:40 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:24.400 00:17:40 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55501 00:05:24.400 00:17:40 -- event/cpu_locks.sh@99 -- # waitforlisten 55501 /var/tmp/spdk.sock 00:05:24.400 00:17:40 -- common/autotest_common.sh@819 -- # '[' -z 55501 ']' 00:05:24.400 00:17:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.400 00:17:40 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.400 00:17:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.400 00:17:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.400 00:17:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.400 00:17:40 -- common/autotest_common.sh@10 -- # set +x 00:05:24.400 [2024-09-29 00:17:40.074324] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:24.400 [2024-09-29 00:17:40.074451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55501 ] 00:05:24.400 [2024-09-29 00:17:40.214692] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:24.400 [2024-09-29 00:17:40.214754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.659 [2024-09-29 00:17:40.284635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.659 [2024-09-29 00:17:40.284855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.228 00:17:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.228 00:17:41 -- common/autotest_common.sh@852 -- # return 0 00:05:25.228 00:17:41 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55517 00:05:25.228 00:17:41 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.228 00:17:41 -- event/cpu_locks.sh@103 -- # waitforlisten 55517 /var/tmp/spdk2.sock 00:05:25.228 00:17:41 -- common/autotest_common.sh@819 -- # '[' -z 55517 ']' 00:05:25.228 00:17:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.228 00:17:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.228 00:17:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.228 00:17:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.228 00:17:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.486 [2024-09-29 00:17:41.131551] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:25.486 [2024-09-29 00:17:41.132094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55517 ] 00:05:25.486 [2024-09-29 00:17:41.271671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.745 [2024-09-29 00:17:41.381538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.745 [2024-09-29 00:17:41.381730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.314 00:17:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.314 00:17:42 -- common/autotest_common.sh@852 -- # return 0 00:05:26.314 00:17:42 -- event/cpu_locks.sh@105 -- # locks_exist 55517 00:05:26.314 00:17:42 -- event/cpu_locks.sh@22 -- # lslocks -p 55517 00:05:26.314 00:17:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.250 00:17:42 -- event/cpu_locks.sh@107 -- # killprocess 55501 00:05:27.250 00:17:42 -- common/autotest_common.sh@926 -- # '[' -z 55501 ']' 00:05:27.250 00:17:42 -- common/autotest_common.sh@930 -- # kill -0 55501 00:05:27.250 00:17:42 -- common/autotest_common.sh@931 -- # uname 00:05:27.250 00:17:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.250 00:17:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55501 00:05:27.250 killing process with pid 55501 00:05:27.250 00:17:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.250 00:17:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.250 00:17:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55501' 00:05:27.250 00:17:42 -- common/autotest_common.sh@945 -- # kill 55501 00:05:27.250 00:17:42 -- common/autotest_common.sh@950 -- # wait 55501 00:05:27.818 00:17:43 -- event/cpu_locks.sh@108 -- # killprocess 55517 00:05:27.818 00:17:43 -- common/autotest_common.sh@926 -- # '[' -z 55517 ']' 00:05:27.818 00:17:43 -- common/autotest_common.sh@930 -- # kill -0 55517 00:05:27.818 00:17:43 -- common/autotest_common.sh@931 -- # uname 00:05:27.818 00:17:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.818 00:17:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55517 00:05:27.818 killing process with pid 55517 00:05:27.818 00:17:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.818 00:17:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.818 00:17:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55517' 00:05:27.818 00:17:43 -- common/autotest_common.sh@945 -- # kill 55517 00:05:27.818 00:17:43 -- common/autotest_common.sh@950 -- # wait 55517 00:05:28.078 00:05:28.078 real 0m3.761s 00:05:28.078 user 0m4.463s 00:05:28.078 sys 0m0.911s 00:05:28.078 00:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.078 00:17:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.078 ************************************ 00:05:28.078 END TEST locking_app_on_unlocked_coremask 00:05:28.078 ************************************ 00:05:28.078 00:17:43 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.078 00:17:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.078 00:17:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.078 00:17:43 -- common/autotest_common.sh@10 -- # set +x 
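Both tests above depend on two spdk_tgt instances sharing core 0, with exactly one of the pair started with --disable-cpumask-locks (which instance carries the flag is what distinguishes the two cases); a hedged sketch of the pattern, with binary path, mask, and socket as traced:

build/bin/spdk_tgt -m 0x1 &                                                  # one instance claims core 0
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # the other skips the lock, so both run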
00:05:28.078 ************************************ 00:05:28.078 START TEST locking_app_on_locked_coremask 00:05:28.078 ************************************ 00:05:28.078 00:17:43 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:28.078 00:17:43 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55584 00:05:28.078 00:17:43 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.078 00:17:43 -- event/cpu_locks.sh@116 -- # waitforlisten 55584 /var/tmp/spdk.sock 00:05:28.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.078 00:17:43 -- common/autotest_common.sh@819 -- # '[' -z 55584 ']' 00:05:28.078 00:17:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.078 00:17:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.078 00:17:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.078 00:17:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.078 00:17:43 -- common/autotest_common.sh@10 -- # set +x 00:05:28.078 [2024-09-29 00:17:43.877422] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:28.078 [2024-09-29 00:17:43.877672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55584 ] 00:05:28.337 [2024-09-29 00:17:44.005969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.337 [2024-09-29 00:17:44.056834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.337 [2024-09-29 00:17:44.057229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.271 00:17:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.271 00:17:44 -- common/autotest_common.sh@852 -- # return 0 00:05:29.271 00:17:44 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55600 00:05:29.271 00:17:44 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.271 00:17:44 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55600 /var/tmp/spdk2.sock 00:05:29.271 00:17:44 -- common/autotest_common.sh@640 -- # local es=0 00:05:29.271 00:17:44 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55600 /var/tmp/spdk2.sock 00:05:29.271 00:17:44 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:29.271 00:17:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:29.271 00:17:44 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:29.271 00:17:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:29.271 00:17:44 -- common/autotest_common.sh@643 -- # waitforlisten 55600 /var/tmp/spdk2.sock 00:05:29.271 00:17:44 -- common/autotest_common.sh@819 -- # '[' -z 55600 ']' 00:05:29.271 00:17:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.271 00:17:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.271 00:17:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:29.271 00:17:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.271 00:17:44 -- common/autotest_common.sh@10 -- # set +x 00:05:29.271 [2024-09-29 00:17:44.926690] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:29.271 [2024-09-29 00:17:44.927693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55600 ] 00:05:29.271 [2024-09-29 00:17:45.067586] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55584 has claimed it. 00:05:29.271 [2024-09-29 00:17:45.067657] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.862 ERROR: process (pid: 55600) is no longer running 00:05:29.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55600) - No such process 00:05:29.862 00:17:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.862 00:17:45 -- common/autotest_common.sh@852 -- # return 1 00:05:29.862 00:17:45 -- common/autotest_common.sh@643 -- # es=1 00:05:29.862 00:17:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:29.862 00:17:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:29.862 00:17:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:29.862 00:17:45 -- event/cpu_locks.sh@122 -- # locks_exist 55584 00:05:29.862 00:17:45 -- event/cpu_locks.sh@22 -- # lslocks -p 55584 00:05:29.862 00:17:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.429 00:17:46 -- event/cpu_locks.sh@124 -- # killprocess 55584 00:05:30.429 00:17:46 -- common/autotest_common.sh@926 -- # '[' -z 55584 ']' 00:05:30.429 00:17:46 -- common/autotest_common.sh@930 -- # kill -0 55584 00:05:30.429 00:17:46 -- common/autotest_common.sh@931 -- # uname 00:05:30.429 00:17:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.429 00:17:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55584 00:05:30.429 00:17:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:30.429 00:17:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:30.429 killing process with pid 55584 00:05:30.429 00:17:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55584' 00:05:30.429 00:17:46 -- common/autotest_common.sh@945 -- # kill 55584 00:05:30.429 00:17:46 -- common/autotest_common.sh@950 -- # wait 55584 00:05:30.687 00:05:30.687 real 0m2.492s 00:05:30.687 user 0m3.038s 00:05:30.687 sys 0m0.492s 00:05:30.687 00:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.687 ************************************ 00:05:30.687 END TEST locking_app_on_locked_coremask 00:05:30.687 ************************************ 00:05:30.687 00:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.687 00:17:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.687 00:17:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.687 00:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.687 00:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.687 ************************************ 00:05:30.687 START TEST locking_overlapped_coremask 00:05:30.687 ************************************ 00:05:30.687 00:17:46 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:30.687 00:17:46 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55644 00:05:30.687 00:17:46 -- event/cpu_locks.sh@133 -- # waitforlisten 55644 /var/tmp/spdk.sock 00:05:30.687 00:17:46 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.687 00:17:46 -- common/autotest_common.sh@819 -- # '[' -z 55644 ']' 00:05:30.687 00:17:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.687 00:17:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.687 00:17:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.687 00:17:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.687 00:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.687 [2024-09-29 00:17:46.428800] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:30.687 [2024-09-29 00:17:46.428902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55644 ] 00:05:30.944 [2024-09-29 00:17:46.565907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.944 [2024-09-29 00:17:46.625206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.944 [2024-09-29 00:17:46.625812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.944 [2024-09-29 00:17:46.625872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.944 [2024-09-29 00:17:46.625879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.876 00:17:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.876 00:17:47 -- common/autotest_common.sh@852 -- # return 0 00:05:31.876 00:17:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55662 00:05:31.876 00:17:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55662 /var/tmp/spdk2.sock 00:05:31.876 00:17:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.876 00:17:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:31.876 00:17:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 55662 /var/tmp/spdk2.sock 00:05:31.876 00:17:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:31.876 00:17:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:31.876 00:17:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:31.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.876 00:17:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:31.876 00:17:47 -- common/autotest_common.sh@643 -- # waitforlisten 55662 /var/tmp/spdk2.sock 00:05:31.876 00:17:47 -- common/autotest_common.sh@819 -- # '[' -z 55662 ']' 00:05:31.877 00:17:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.877 00:17:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.877 00:17:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
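The first target above took -m 0x7 and the second is started with -m 0x1c; the claim failure that follows comes from the one core the masks share, which a line of bash arithmetic makes visible (masks from the trace):

# 0x7 = 0b00111 -> cores 0,1,2 ; 0x1c = 0b11100 -> cores 2,3,4
echo $(( 0x7 & 0x1c ))   # prints 4, i.e. bit 2 is set in both masks, so both targets want core 2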
00:05:31.877 00:17:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.877 00:17:47 -- common/autotest_common.sh@10 -- # set +x 00:05:31.877 [2024-09-29 00:17:47.453268] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:31.877 [2024-09-29 00:17:47.453515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55662 ] 00:05:31.877 [2024-09-29 00:17:47.587773] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55644 has claimed it. 00:05:31.877 [2024-09-29 00:17:47.587843] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.443 ERROR: process (pid: 55662) is no longer running 00:05:32.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (55662) - No such process 00:05:32.443 00:17:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.443 00:17:48 -- common/autotest_common.sh@852 -- # return 1 00:05:32.443 00:17:48 -- common/autotest_common.sh@643 -- # es=1 00:05:32.443 00:17:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:32.443 00:17:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:32.443 00:17:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:32.443 00:17:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.444 00:17:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.444 00:17:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.444 00:17:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.444 00:17:48 -- event/cpu_locks.sh@141 -- # killprocess 55644 00:05:32.444 00:17:48 -- common/autotest_common.sh@926 -- # '[' -z 55644 ']' 00:05:32.444 00:17:48 -- common/autotest_common.sh@930 -- # kill -0 55644 00:05:32.444 00:17:48 -- common/autotest_common.sh@931 -- # uname 00:05:32.444 00:17:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.444 00:17:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55644 00:05:32.444 killing process with pid 55644 00:05:32.444 00:17:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.444 00:17:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.444 00:17:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55644' 00:05:32.444 00:17:48 -- common/autotest_common.sh@945 -- # kill 55644 00:05:32.444 00:17:48 -- common/autotest_common.sh@950 -- # wait 55644 00:05:32.702 ************************************ 00:05:32.702 END TEST locking_overlapped_coremask 00:05:32.702 ************************************ 00:05:32.702 00:05:32.702 real 0m2.107s 00:05:32.702 user 0m5.983s 00:05:32.702 sys 0m0.332s 00:05:32.702 00:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.702 00:17:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.702 00:17:48 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.702 00:17:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.702 00:17:48 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.702 00:17:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.702 ************************************ 00:05:32.702 START TEST locking_overlapped_coremask_via_rpc 00:05:32.702 ************************************ 00:05:32.702 00:17:48 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:32.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.702 00:17:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55703 00:05:32.702 00:17:48 -- event/cpu_locks.sh@149 -- # waitforlisten 55703 /var/tmp/spdk.sock 00:05:32.702 00:17:48 -- common/autotest_common.sh@819 -- # '[' -z 55703 ']' 00:05:32.702 00:17:48 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.702 00:17:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.702 00:17:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.702 00:17:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.702 00:17:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.702 00:17:48 -- common/autotest_common.sh@10 -- # set +x 00:05:32.961 [2024-09-29 00:17:48.574696] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:32.961 [2024-09-29 00:17:48.574784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55703 ] 00:05:32.961 [2024-09-29 00:17:48.705858] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:32.961 [2024-09-29 00:17:48.705911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.961 [2024-09-29 00:17:48.763834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.961 [2024-09-29 00:17:48.764594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.961 [2024-09-29 00:17:48.764662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.961 [2024-09-29 00:17:48.764667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.896 00:17:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.896 00:17:49 -- common/autotest_common.sh@852 -- # return 0 00:05:33.896 00:17:49 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.896 00:17:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55721 00:05:33.896 00:17:49 -- event/cpu_locks.sh@153 -- # waitforlisten 55721 /var/tmp/spdk2.sock 00:05:33.896 00:17:49 -- common/autotest_common.sh@819 -- # '[' -z 55721 ']' 00:05:33.896 00:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.896 00:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.896 00:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:33.896 00:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.896 00:17:49 -- common/autotest_common.sh@10 -- # set +x 00:05:33.896 [2024-09-29 00:17:49.611216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:33.896 [2024-09-29 00:17:49.611478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55721 ] 00:05:34.155 [2024-09-29 00:17:49.755934] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.155 [2024-09-29 00:17:49.756005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.155 [2024-09-29 00:17:49.871059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.155 [2024-09-29 00:17:49.871384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.155 [2024-09-29 00:17:49.871513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:34.155 [2024-09-29 00:17:49.871923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.090 00:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.090 00:17:50 -- common/autotest_common.sh@852 -- # return 0 00:05:35.090 00:17:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.090 00:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.090 00:17:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.090 00:17:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.091 00:17:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.091 00:17:50 -- common/autotest_common.sh@640 -- # local es=0 00:05:35.091 00:17:50 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.091 00:17:50 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:35.091 00:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.091 00:17:50 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:35.091 00:17:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.091 00:17:50 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.091 00:17:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.091 00:17:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.091 [2024-09-29 00:17:50.621632] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55703 has claimed it. 00:05:35.091 request: 00:05:35.091 { 00:05:35.091 "method": "framework_enable_cpumask_locks", 00:05:35.091 "req_id": 1 00:05:35.091 } 00:05:35.091 Got JSON-RPC error response 00:05:35.091 response: 00:05:35.091 { 00:05:35.091 "code": -32603, 00:05:35.091 "message": "Failed to claim CPU core: 2" 00:05:35.091 } 00:05:35.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
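Here both targets were launched with --disable-cpumask-locks and the first one claimed cores 0-2 via framework_enable_cpumask_locks, so issuing the same RPC against the second target's socket fails as shown. A hedged reproduction of that failing call (method, socket, and error text are from the trace; the scripts/rpc.py invocation is an assumption):

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# => JSON-RPC error -32603, "Failed to claim CPU core: 2", while pid 55703 still holds the lock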
00:05:35.091 00:17:50 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:35.091 00:17:50 -- common/autotest_common.sh@643 -- # es=1 00:05:35.091 00:17:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:35.091 00:17:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:35.091 00:17:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:35.091 00:17:50 -- event/cpu_locks.sh@158 -- # waitforlisten 55703 /var/tmp/spdk.sock 00:05:35.091 00:17:50 -- common/autotest_common.sh@819 -- # '[' -z 55703 ']' 00:05:35.091 00:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.091 00:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.091 00:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.091 00:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.091 00:17:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.091 00:17:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.091 00:17:50 -- common/autotest_common.sh@852 -- # return 0 00:05:35.091 00:17:50 -- event/cpu_locks.sh@159 -- # waitforlisten 55721 /var/tmp/spdk2.sock 00:05:35.091 00:17:50 -- common/autotest_common.sh@819 -- # '[' -z 55721 ']' 00:05:35.091 00:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.091 00:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.091 00:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.091 00:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.091 00:17:50 -- common/autotest_common.sh@10 -- # set +x 00:05:35.350 00:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.350 00:17:51 -- common/autotest_common.sh@852 -- # return 0 00:05:35.350 00:17:51 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:35.350 00:17:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.350 00:17:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.350 ************************************ 00:05:35.350 END TEST locking_overlapped_coremask_via_rpc 00:05:35.350 ************************************ 00:05:35.350 00:17:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.350 00:05:35.350 real 0m2.614s 00:05:35.350 user 0m1.374s 00:05:35.350 sys 0m0.167s 00:05:35.350 00:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.350 00:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.350 00:17:51 -- event/cpu_locks.sh@174 -- # cleanup 00:05:35.350 00:17:51 -- event/cpu_locks.sh@15 -- # [[ -z 55703 ]] 00:05:35.350 00:17:51 -- event/cpu_locks.sh@15 -- # killprocess 55703 00:05:35.350 00:17:51 -- common/autotest_common.sh@926 -- # '[' -z 55703 ']' 00:05:35.350 00:17:51 -- common/autotest_common.sh@930 -- # kill -0 55703 00:05:35.350 00:17:51 -- common/autotest_common.sh@931 -- # uname 00:05:35.350 00:17:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.350 00:17:51 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 55703 00:05:35.609 killing process with pid 55703 00:05:35.609 00:17:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.609 00:17:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.609 00:17:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55703' 00:05:35.609 00:17:51 -- common/autotest_common.sh@945 -- # kill 55703 00:05:35.609 00:17:51 -- common/autotest_common.sh@950 -- # wait 55703 00:05:35.867 00:17:51 -- event/cpu_locks.sh@16 -- # [[ -z 55721 ]] 00:05:35.867 00:17:51 -- event/cpu_locks.sh@16 -- # killprocess 55721 00:05:35.867 00:17:51 -- common/autotest_common.sh@926 -- # '[' -z 55721 ']' 00:05:35.867 00:17:51 -- common/autotest_common.sh@930 -- # kill -0 55721 00:05:35.867 00:17:51 -- common/autotest_common.sh@931 -- # uname 00:05:35.867 00:17:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.867 00:17:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55721 00:05:35.867 killing process with pid 55721 00:05:35.867 00:17:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:35.867 00:17:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:35.867 00:17:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55721' 00:05:35.867 00:17:51 -- common/autotest_common.sh@945 -- # kill 55721 00:05:35.867 00:17:51 -- common/autotest_common.sh@950 -- # wait 55721 00:05:36.126 00:17:51 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.126 00:17:51 -- event/cpu_locks.sh@1 -- # cleanup 00:05:36.126 00:17:51 -- event/cpu_locks.sh@15 -- # [[ -z 55703 ]] 00:05:36.126 00:17:51 -- event/cpu_locks.sh@15 -- # killprocess 55703 00:05:36.126 00:17:51 -- common/autotest_common.sh@926 -- # '[' -z 55703 ']' 00:05:36.126 00:17:51 -- common/autotest_common.sh@930 -- # kill -0 55703 00:05:36.126 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55703) - No such process 00:05:36.126 Process with pid 55703 is not found 00:05:36.126 00:17:51 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55703 is not found' 00:05:36.126 00:17:51 -- event/cpu_locks.sh@16 -- # [[ -z 55721 ]] 00:05:36.126 Process with pid 55721 is not found 00:05:36.126 00:17:51 -- event/cpu_locks.sh@16 -- # killprocess 55721 00:05:36.126 00:17:51 -- common/autotest_common.sh@926 -- # '[' -z 55721 ']' 00:05:36.126 00:17:51 -- common/autotest_common.sh@930 -- # kill -0 55721 00:05:36.126 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (55721) - No such process 00:05:36.126 00:17:51 -- common/autotest_common.sh@953 -- # echo 'Process with pid 55721 is not found' 00:05:36.126 00:17:51 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.126 00:05:36.126 real 0m18.870s 00:05:36.126 user 0m34.898s 00:05:36.126 sys 0m4.087s 00:05:36.126 00:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.126 00:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.126 ************************************ 00:05:36.126 END TEST cpu_locks 00:05:36.126 ************************************ 00:05:36.126 ************************************ 00:05:36.126 END TEST event 00:05:36.126 ************************************ 00:05:36.126 00:05:36.126 real 0m45.941s 00:05:36.126 user 1m31.982s 00:05:36.126 sys 0m7.205s 00:05:36.126 00:17:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.126 00:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.126 00:17:51 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.126 00:17:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.126 00:17:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.126 00:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.126 ************************************ 00:05:36.126 START TEST thread 00:05:36.126 ************************************ 00:05:36.126 00:17:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:36.385 * Looking for test storage... 00:05:36.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:36.385 00:17:51 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.385 00:17:51 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:36.385 00:17:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.385 00:17:51 -- common/autotest_common.sh@10 -- # set +x 00:05:36.385 ************************************ 00:05:36.385 START TEST thread_poller_perf 00:05:36.385 ************************************ 00:05:36.385 00:17:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.385 [2024-09-29 00:17:52.013701] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:36.385 [2024-09-29 00:17:52.013971] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55843 ] 00:05:36.385 [2024-09-29 00:17:52.150098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.385 [2024-09-29 00:17:52.200917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.385 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:37.764 ====================================== 00:05:37.764 busy:2208536820 (cyc) 00:05:37.764 total_run_count: 361000 00:05:37.764 tsc_hz: 2200000000 (cyc) 00:05:37.764 ====================================== 00:05:37.764 poller_cost: 6117 (cyc), 2780 (nsec) 00:05:37.764 00:05:37.764 real 0m1.297s 00:05:37.764 ************************************ 00:05:37.764 END TEST thread_poller_perf 00:05:37.764 ************************************ 00:05:37.764 user 0m1.153s 00:05:37.764 sys 0m0.036s 00:05:37.764 00:17:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.764 00:17:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.764 00:17:53 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.764 00:17:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:37.764 00:17:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.764 00:17:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.764 ************************************ 00:05:37.764 START TEST thread_poller_perf 00:05:37.764 ************************************ 00:05:37.764 00:17:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.764 [2024-09-29 00:17:53.365280] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
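The poller_cost figure above is simply the busy cycle count divided by the run count, then converted to nanoseconds with the reported tsc_hz; a small awk check of the 1-microsecond-period run, using the numbers from the ====== summary:

awk 'BEGIN { busy = 2208536820; runs = 361000; hz = 2200000000
             cyc = busy / runs                        # cycles per poller invocation
             printf "%d cyc, %d nsec\n", cyc, cyc / hz * 1e9 }'
# => 6117 cyc, 2780 nsec, matching the summary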
00:05:37.764 [2024-09-29 00:17:53.365397] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55873 ] 00:05:37.764 [2024-09-29 00:17:53.501664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.764 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:37.764 [2024-09-29 00:17:53.551108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.142 ====================================== 00:05:39.142 busy:2203149894 (cyc) 00:05:39.142 total_run_count: 4981000 00:05:39.142 tsc_hz: 2200000000 (cyc) 00:05:39.142 ====================================== 00:05:39.142 poller_cost: 442 (cyc), 200 (nsec) 00:05:39.142 00:05:39.142 real 0m1.290s 00:05:39.142 user 0m1.147s 00:05:39.142 sys 0m0.036s 00:05:39.142 00:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.142 ************************************ 00:05:39.142 END TEST thread_poller_perf 00:05:39.142 ************************************ 00:05:39.142 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.142 00:17:54 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:39.142 ************************************ 00:05:39.142 END TEST thread 00:05:39.142 ************************************ 00:05:39.142 00:05:39.142 real 0m2.771s 00:05:39.142 user 0m2.370s 00:05:39.142 sys 0m0.182s 00:05:39.142 00:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.142 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.142 00:17:54 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.142 00:17:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.142 00:17:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.142 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.142 ************************************ 00:05:39.142 START TEST accel 00:05:39.142 ************************************ 00:05:39.142 00:17:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:39.142 * Looking for test storage... 00:05:39.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:39.142 00:17:54 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:39.142 00:17:54 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:39.142 00:17:54 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.142 00:17:54 -- accel/accel.sh@59 -- # spdk_tgt_pid=55946 00:05:39.142 00:17:54 -- accel/accel.sh@60 -- # waitforlisten 55946 00:05:39.142 00:17:54 -- common/autotest_common.sh@819 -- # '[' -z 55946 ']' 00:05:39.142 00:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.142 00:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.142 00:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:39.142 00:17:54 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:39.142 00:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.142 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:05:39.142 00:17:54 -- accel/accel.sh@58 -- # build_accel_config 00:05:39.142 00:17:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.142 00:17:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.142 00:17:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.142 00:17:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.142 00:17:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.142 00:17:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.142 00:17:54 -- accel/accel.sh@42 -- # jq -r . 00:05:39.142 [2024-09-29 00:17:54.872756] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:39.142 [2024-09-29 00:17:54.873050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55946 ] 00:05:39.402 [2024-09-29 00:17:55.010165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.402 [2024-09-29 00:17:55.062861] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.402 [2024-09-29 00:17:55.063019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.341 00:17:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.341 00:17:55 -- common/autotest_common.sh@852 -- # return 0 00:05:40.341 00:17:55 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:40.341 00:17:55 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:40.341 00:17:55 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:40.341 00:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.341 00:17:55 -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 00:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 
00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # IFS== 00:05:40.341 00:17:55 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.341 00:17:55 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.341 00:17:55 -- accel/accel.sh@67 -- # killprocess 55946 00:05:40.341 00:17:55 -- common/autotest_common.sh@926 -- # '[' -z 55946 ']' 00:05:40.341 00:17:55 -- common/autotest_common.sh@930 -- # kill -0 55946 00:05:40.341 00:17:55 -- common/autotest_common.sh@931 -- # uname 00:05:40.341 00:17:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.341 00:17:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55946 00:05:40.341 00:17:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.341 00:17:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.341 00:17:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55946' 00:05:40.341 killing process with pid 55946 00:05:40.341 00:17:55 -- common/autotest_common.sh@945 -- # kill 55946 00:05:40.341 00:17:55 -- common/autotest_common.sh@950 -- # wait 55946 00:05:40.601 00:17:56 -- accel/accel.sh@68 -- # trap - ERR 00:05:40.601 00:17:56 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:40.601 00:17:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:40.601 00:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.601 00:17:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.601 00:17:56 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:40.601 00:17:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.601 00:17:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.601 00:17:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.601 00:17:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.601 00:17:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.601 00:17:56 -- accel/accel.sh@42 -- # jq -r . 
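(Context for the accel_get_opc_assignments / jq step traced above, sketched rather than taken from the log: the RPC returns a JSON object mapping each opcode to the module assigned to it, and the jq filter flattens that object into key=value lines which the IFS== / read loop stores in expected_opcs. With an illustrative, made-up payload the transform behaves like this:
echo '{"copy":"software","fill":"software","crc32c":"software"}' \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software
# crc32c=software
On this run every opcode came back assigned to the software module, which is what the loop records.)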
00:05:40.601 00:17:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.601 00:17:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.601 00:17:56 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.601 00:17:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:40.601 00:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.601 00:17:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.601 ************************************ 00:05:40.601 START TEST accel_missing_filename 00:05:40.601 ************************************ 00:05:40.601 00:17:56 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:40.601 00:17:56 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.601 00:17:56 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.601 00:17:56 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:40.601 00:17:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.601 00:17:56 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:40.601 00:17:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.601 00:17:56 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:40.601 00:17:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.601 00:17:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.601 00:17:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.601 00:17:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.601 00:17:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.601 00:17:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.601 00:17:56 -- accel/accel.sh@42 -- # jq -r . 00:05:40.601 [2024-09-29 00:17:56.295999] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:40.601 [2024-09-29 00:17:56.296240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55998 ] 00:05:40.601 [2024-09-29 00:17:56.422964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.861 [2024-09-29 00:17:56.472845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.861 [2024-09-29 00:17:56.501344] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.861 [2024-09-29 00:17:56.540899] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:40.861 A filename is required. 
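(Note on the failure above, inferred from the option listing printed later in this log rather than from separate accel_perf documentation: the compress workload needs an uncompressed input file passed with -l, and accel_missing_filename deliberately omits it, so "A filename is required." is the expected outcome. A valid invocation would look roughly like:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
using the same input file the accel_compress_verify test passes below.)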
00:05:40.861 00:17:56 -- common/autotest_common.sh@643 -- # es=234 00:05:40.861 00:17:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:40.861 00:17:56 -- common/autotest_common.sh@652 -- # es=106 00:05:40.861 00:17:56 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:40.861 00:17:56 -- common/autotest_common.sh@660 -- # es=1 00:05:40.861 00:17:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:40.861 00:05:40.861 real 0m0.353s 00:05:40.861 user 0m0.235s 00:05:40.861 sys 0m0.067s 00:05:40.861 00:17:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.861 ************************************ 00:05:40.861 END TEST accel_missing_filename 00:05:40.861 ************************************ 00:05:40.861 00:17:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.861 00:17:56 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.861 00:17:56 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:40.861 00:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.861 00:17:56 -- common/autotest_common.sh@10 -- # set +x 00:05:40.861 ************************************ 00:05:40.861 START TEST accel_compress_verify 00:05:40.861 ************************************ 00:05:40.861 00:17:56 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.861 00:17:56 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.861 00:17:56 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.861 00:17:56 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:40.861 00:17:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.861 00:17:56 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:40.861 00:17:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.861 00:17:56 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.861 00:17:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:40.861 00:17:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.861 00:17:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.861 00:17:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.861 00:17:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.861 00:17:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.861 00:17:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.861 00:17:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.861 00:17:56 -- accel/accel.sh@42 -- # jq -r . 00:05:40.861 [2024-09-29 00:17:56.700524] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:05:40.861 [2024-09-29 00:17:56.700614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56017 ] 00:05:41.120 [2024-09-29 00:17:56.837002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.120 [2024-09-29 00:17:56.893480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.120 [2024-09-29 00:17:56.923734] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.120 [2024-09-29 00:17:56.962953] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:41.380 00:05:41.380 Compression does not support the verify option, aborting. 00:05:41.380 ************************************ 00:05:41.380 END TEST accel_compress_verify 00:05:41.380 ************************************ 00:05:41.380 00:17:57 -- common/autotest_common.sh@643 -- # es=161 00:05:41.380 00:17:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.380 00:17:57 -- common/autotest_common.sh@652 -- # es=33 00:05:41.380 00:17:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:41.380 00:17:57 -- common/autotest_common.sh@660 -- # es=1 00:05:41.380 00:17:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.380 00:05:41.380 real 0m0.374s 00:05:41.380 user 0m0.246s 00:05:41.380 sys 0m0.077s 00:05:41.380 00:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.380 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.380 00:17:57 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:41.380 00:17:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:41.380 00:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.380 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.380 ************************************ 00:05:41.380 START TEST accel_wrong_workload 00:05:41.380 ************************************ 00:05:41.380 00:17:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:41.380 00:17:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:41.380 00:17:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:41.380 00:17:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.380 00:17:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:41.380 00:17:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:41.380 00:17:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.380 00:17:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.380 00:17:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.380 00:17:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.380 00:17:57 -- accel/accel.sh@42 -- # jq -r . 
00:05:41.380 Unsupported workload type: foobar 00:05:41.380 [2024-09-29 00:17:57.121044] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:41.380 accel_perf options: 00:05:41.380 [-h help message] 00:05:41.380 [-q queue depth per core] 00:05:41.380 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.380 [-T number of threads per core 00:05:41.380 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.380 [-t time in seconds] 00:05:41.380 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.380 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.380 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.380 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.380 [-S for crc32c workload, use this seed value (default 0) 00:05:41.380 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.380 [-f for fill workload, use this BYTE value (default 255) 00:05:41.380 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.380 [-y verify result if this switch is on] 00:05:41.380 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.380 Can be used to spread operations across a wider range of memory. 00:05:41.380 00:17:57 -- common/autotest_common.sh@643 -- # es=1 00:05:41.380 00:17:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.380 00:17:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.380 ************************************ 00:05:41.380 END TEST accel_wrong_workload 00:05:41.380 ************************************ 00:05:41.380 00:17:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.380 00:05:41.380 real 0m0.033s 00:05:41.380 user 0m0.016s 00:05:41.380 sys 0m0.016s 00:05:41.380 00:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.380 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.380 00:17:57 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.380 00:17:57 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:41.380 00:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.380 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.380 ************************************ 00:05:41.380 START TEST accel_negative_buffers 00:05:41.380 ************************************ 00:05:41.380 00:17:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.380 00:17:57 -- common/autotest_common.sh@640 -- # local es=0 00:05:41.380 00:17:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.380 00:17:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:41.380 00:17:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.380 00:17:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.380 00:17:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:41.380 00:17:57 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:41.380 00:17:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.380 00:17:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.380 00:17:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.380 00:17:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.380 00:17:57 -- accel/accel.sh@42 -- # jq -r . 00:05:41.380 -x option must be non-negative. 00:05:41.380 [2024-09-29 00:17:57.195892] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.380 accel_perf options: 00:05:41.380 [-h help message] 00:05:41.380 [-q queue depth per core] 00:05:41.380 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.380 [-T number of threads per core 00:05:41.380 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.380 [-t time in seconds] 00:05:41.380 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.380 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.380 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.380 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.380 [-S for crc32c workload, use this seed value (default 0) 00:05:41.380 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.380 [-f for fill workload, use this BYTE value (default 255) 00:05:41.380 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.380 [-y verify result if this switch is on] 00:05:41.380 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.380 Can be used to spread operations across a wider range of memory. 
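(Illustration of the usage text above, a sketch only: the accel_negative_buffers test passes -x -1, which accel_perf rejects with "-x option must be non-negative." A well-formed xor run would supply at least two source buffers, e.g.:
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
with -y enabling result verification as described in the option list.)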
00:05:41.380 ************************************ 00:05:41.380 END TEST accel_negative_buffers 00:05:41.380 ************************************ 00:05:41.381 00:17:57 -- common/autotest_common.sh@643 -- # es=1 00:05:41.381 00:17:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.381 00:17:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.381 00:17:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.381 00:05:41.381 real 0m0.027s 00:05:41.381 user 0m0.017s 00:05:41.381 sys 0m0.010s 00:05:41.381 00:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.381 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.640 00:17:57 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.640 00:17:57 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:41.640 00:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.640 00:17:57 -- common/autotest_common.sh@10 -- # set +x 00:05:41.640 ************************************ 00:05:41.640 START TEST accel_crc32c 00:05:41.640 ************************************ 00:05:41.640 00:17:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.640 00:17:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.640 00:17:57 -- accel/accel.sh@17 -- # local accel_module 00:05:41.640 00:17:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.640 00:17:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:41.640 00:17:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.640 00:17:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.640 00:17:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.640 00:17:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.640 00:17:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.640 00:17:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.640 00:17:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.640 00:17:57 -- accel/accel.sh@42 -- # jq -r . 00:05:41.640 [2024-09-29 00:17:57.273812] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:41.640 [2024-09-29 00:17:57.273893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56081 ] 00:05:41.640 [2024-09-29 00:17:57.409532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.640 [2024-09-29 00:17:57.458469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.019 00:17:58 -- accel/accel.sh@18 -- # out=' 00:05:43.019 SPDK Configuration: 00:05:43.019 Core mask: 0x1 00:05:43.019 00:05:43.019 Accel Perf Configuration: 00:05:43.019 Workload Type: crc32c 00:05:43.019 CRC-32C seed: 32 00:05:43.019 Transfer size: 4096 bytes 00:05:43.019 Vector count 1 00:05:43.019 Module: software 00:05:43.019 Queue depth: 32 00:05:43.019 Allocate depth: 32 00:05:43.019 # threads/core: 1 00:05:43.019 Run time: 1 seconds 00:05:43.019 Verify: Yes 00:05:43.019 00:05:43.019 Running for 1 seconds... 
00:05:43.019 00:05:43.019 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.019 ------------------------------------------------------------------------------------ 00:05:43.019 0,0 529792/s 2069 MiB/s 0 0 00:05:43.019 ==================================================================================== 00:05:43.019 Total 529792/s 2069 MiB/s 0 0' 00:05:43.019 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.019 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.019 00:17:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.019 00:17:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:43.019 00:17:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.019 00:17:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.020 00:17:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.020 00:17:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.020 00:17:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.020 00:17:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.020 00:17:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.020 00:17:58 -- accel/accel.sh@42 -- # jq -r . 00:05:43.020 [2024-09-29 00:17:58.644537] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:43.020 [2024-09-29 00:17:58.644629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56095 ] 00:05:43.020 [2024-09-29 00:17:58.780650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.020 [2024-09-29 00:17:58.828840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val=0x1 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val=crc32c 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- accel/accel.sh@21 -- # val=32 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.020 00:17:58 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:43.020 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.020 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val=software 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val=32 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val=32 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val=1 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val=Yes 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.279 00:17:58 -- accel/accel.sh@21 -- # val= 00:05:43.279 00:17:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.279 00:17:58 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- 
accel/accel.sh@20 -- # read -r var val 00:05:44.215 00:17:59 -- accel/accel.sh@21 -- # val= 00:05:44.215 00:17:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # IFS=: 00:05:44.215 00:17:59 -- accel/accel.sh@20 -- # read -r var val 00:05:44.215 ************************************ 00:05:44.215 END TEST accel_crc32c 00:05:44.215 ************************************ 00:05:44.215 00:17:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.215 00:17:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:44.215 00:17:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.215 00:05:44.215 real 0m2.750s 00:05:44.215 user 0m2.412s 00:05:44.215 sys 0m0.136s 00:05:44.215 00:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.215 00:17:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.215 00:18:00 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.215 00:18:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:44.215 00:18:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.215 00:18:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.215 ************************************ 00:05:44.215 START TEST accel_crc32c_C2 00:05:44.215 ************************************ 00:05:44.215 00:18:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:44.215 00:18:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.215 00:18:00 -- accel/accel.sh@17 -- # local accel_module 00:05:44.216 00:18:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:44.216 00:18:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:44.216 00:18:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.216 00:18:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.216 00:18:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.216 00:18:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.216 00:18:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.216 00:18:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.216 00:18:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.216 00:18:00 -- accel/accel.sh@42 -- # jq -r . 00:05:44.474 [2024-09-29 00:18:00.077292] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:44.474 [2024-09-29 00:18:00.077402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56135 ] 00:05:44.474 [2024-09-29 00:18:00.213647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.474 [2024-09-29 00:18:00.270423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.853 00:18:01 -- accel/accel.sh@18 -- # out=' 00:05:45.853 SPDK Configuration: 00:05:45.853 Core mask: 0x1 00:05:45.853 00:05:45.853 Accel Perf Configuration: 00:05:45.853 Workload Type: crc32c 00:05:45.853 CRC-32C seed: 0 00:05:45.853 Transfer size: 4096 bytes 00:05:45.853 Vector count 2 00:05:45.853 Module: software 00:05:45.853 Queue depth: 32 00:05:45.853 Allocate depth: 32 00:05:45.853 # threads/core: 1 00:05:45.853 Run time: 1 seconds 00:05:45.853 Verify: Yes 00:05:45.853 00:05:45.853 Running for 1 seconds... 
00:05:45.853 00:05:45.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.853 ------------------------------------------------------------------------------------ 00:05:45.853 0,0 395360/s 3088 MiB/s 0 0 00:05:45.853 ==================================================================================== 00:05:45.853 Total 395360/s 1544 MiB/s 0 0' 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:45.853 00:18:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:45.853 00:18:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.853 00:18:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.853 00:18:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.853 00:18:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.853 00:18:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.853 00:18:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.853 00:18:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.853 00:18:01 -- accel/accel.sh@42 -- # jq -r . 00:05:45.853 [2024-09-29 00:18:01.443931] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:45.853 [2024-09-29 00:18:01.444020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56149 ] 00:05:45.853 [2024-09-29 00:18:01.570624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.853 [2024-09-29 00:18:01.620964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val=0x1 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val=crc32c 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val=0 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val=software 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.853 00:18:01 -- accel/accel.sh@21 -- # val=32 00:05:45.853 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.853 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val=32 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val=1 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val=Yes 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:45.854 00:18:01 -- accel/accel.sh@21 -- # val= 00:05:45.854 00:18:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # IFS=: 00:05:45.854 00:18:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 ************************************ 00:05:47.239 END TEST accel_crc32c_C2 00:05:47.239 ************************************ 
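(Back-of-envelope check of the crc32c results above, not produced by the test itself: the MiB/s column is transfers per second times the 4096-byte transfer size, times the vector count for the -C 2 run:
echo $(( 529792 * 4096 / 1024 / 1024 ))       # ~2069 MiB/s, single 4096-byte vector
echo $(( 395360 * 4096 * 2 / 1024 / 1024 ))   # ~3088 MiB/s, two 4096-byte vectors
matching the per-core rows reported for accel_crc32c and accel_crc32c_C2.)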
00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@21 -- # val= 00:05:47.239 00:18:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # IFS=: 00:05:47.239 00:18:02 -- accel/accel.sh@20 -- # read -r var val 00:05:47.239 00:18:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.239 00:18:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:47.239 00:18:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.239 00:05:47.239 real 0m2.729s 00:05:47.239 user 0m2.395s 00:05:47.239 sys 0m0.135s 00:05:47.239 00:18:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.239 00:18:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.239 00:18:02 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:47.239 00:18:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:47.239 00:18:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.239 00:18:02 -- common/autotest_common.sh@10 -- # set +x 00:05:47.239 ************************************ 00:05:47.239 START TEST accel_copy 00:05:47.239 ************************************ 00:05:47.239 00:18:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:47.239 00:18:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.239 00:18:02 -- accel/accel.sh@17 -- # local accel_module 00:05:47.239 00:18:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:47.239 00:18:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:47.239 00:18:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.239 00:18:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.239 00:18:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.239 00:18:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.239 00:18:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.239 00:18:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.239 00:18:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.239 00:18:02 -- accel/accel.sh@42 -- # jq -r . 00:05:47.239 [2024-09-29 00:18:02.855839] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:47.239 [2024-09-29 00:18:02.856093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56178 ] 00:05:47.239 [2024-09-29 00:18:02.993417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.239 [2024-09-29 00:18:03.054658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.627 00:18:04 -- accel/accel.sh@18 -- # out=' 00:05:48.627 SPDK Configuration: 00:05:48.627 Core mask: 0x1 00:05:48.627 00:05:48.627 Accel Perf Configuration: 00:05:48.627 Workload Type: copy 00:05:48.627 Transfer size: 4096 bytes 00:05:48.627 Vector count 1 00:05:48.627 Module: software 00:05:48.627 Queue depth: 32 00:05:48.627 Allocate depth: 32 00:05:48.627 # threads/core: 1 00:05:48.627 Run time: 1 seconds 00:05:48.627 Verify: Yes 00:05:48.627 00:05:48.627 Running for 1 seconds... 
00:05:48.627 00:05:48.627 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.627 ------------------------------------------------------------------------------------ 00:05:48.627 0,0 353152/s 1379 MiB/s 0 0 00:05:48.627 ==================================================================================== 00:05:48.627 Total 353152/s 1379 MiB/s 0 0' 00:05:48.627 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.627 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.627 00:18:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:48.627 00:18:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:48.627 00:18:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.627 00:18:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.627 00:18:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.627 00:18:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.627 00:18:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.627 00:18:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.627 00:18:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.627 00:18:04 -- accel/accel.sh@42 -- # jq -r . 00:05:48.628 [2024-09-29 00:18:04.228331] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:48.628 [2024-09-29 00:18:04.228440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56203 ] 00:05:48.628 [2024-09-29 00:18:04.354844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.628 [2024-09-29 00:18:04.403074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=0x1 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=copy 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- 
accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=software 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=32 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=32 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=1 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val=Yes 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:48.628 00:18:04 -- accel/accel.sh@21 -- # val= 00:05:48.628 00:18:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # IFS=: 00:05:48.628 00:18:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 ************************************ 00:05:50.004 END TEST accel_copy 00:05:50.004 ************************************ 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@21 -- # val= 00:05:50.004 
00:18:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # IFS=: 00:05:50.004 00:18:05 -- accel/accel.sh@20 -- # read -r var val 00:05:50.004 00:18:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.004 00:18:05 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:50.004 00:18:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.004 00:05:50.004 real 0m2.740s 00:05:50.004 user 0m2.396s 00:05:50.004 sys 0m0.144s 00:05:50.004 00:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.004 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.004 00:18:05 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.004 00:18:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:50.004 00:18:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.004 00:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:50.004 ************************************ 00:05:50.004 START TEST accel_fill 00:05:50.004 ************************************ 00:05:50.004 00:18:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.004 00:18:05 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.004 00:18:05 -- accel/accel.sh@17 -- # local accel_module 00:05:50.004 00:18:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.004 00:18:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:50.004 00:18:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.004 00:18:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.004 00:18:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.004 00:18:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.004 00:18:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.004 00:18:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.004 00:18:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.004 00:18:05 -- accel/accel.sh@42 -- # jq -r . 00:05:50.004 [2024-09-29 00:18:05.645133] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:50.005 [2024-09-29 00:18:05.645373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56232 ] 00:05:50.005 [2024-09-29 00:18:05.778845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.005 [2024-09-29 00:18:05.827424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.382 00:18:06 -- accel/accel.sh@18 -- # out=' 00:05:51.382 SPDK Configuration: 00:05:51.382 Core mask: 0x1 00:05:51.382 00:05:51.382 Accel Perf Configuration: 00:05:51.382 Workload Type: fill 00:05:51.382 Fill pattern: 0x80 00:05:51.382 Transfer size: 4096 bytes 00:05:51.382 Vector count 1 00:05:51.382 Module: software 00:05:51.382 Queue depth: 64 00:05:51.382 Allocate depth: 64 00:05:51.382 # threads/core: 1 00:05:51.382 Run time: 1 seconds 00:05:51.382 Verify: Yes 00:05:51.382 00:05:51.382 Running for 1 seconds... 
00:05:51.382 00:05:51.382 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.382 ------------------------------------------------------------------------------------ 00:05:51.382 0,0 528832/s 2065 MiB/s 0 0 00:05:51.382 ==================================================================================== 00:05:51.382 Total 528832/s 2065 MiB/s 0 0' 00:05:51.382 00:18:06 -- accel/accel.sh@20 -- # IFS=: 00:05:51.382 00:18:06 -- accel/accel.sh@20 -- # read -r var val 00:05:51.382 00:18:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.382 00:18:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.382 00:18:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.382 00:18:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.382 00:18:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.382 00:18:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.382 00:18:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.382 00:18:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.382 00:18:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.382 00:18:06 -- accel/accel.sh@42 -- # jq -r . 00:05:51.382 [2024-09-29 00:18:07.004266] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:51.383 [2024-09-29 00:18:07.004389] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56246 ] 00:05:51.383 [2024-09-29 00:18:07.140017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.383 [2024-09-29 00:18:07.188012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val=0x1 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val=fill 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val=0x80 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 
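The accel_fill block above was started as "run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y", and the "SPDK Configuration" banner echoes those flags back: -w fill -> Workload Type: fill, -f 128 -> Fill pattern: 0x80, -q 64 -> Queue depth: 64, -a 64 -> Allocate depth: 64, -t 1 -> Run time: 1 seconds, -y -> Verify: Yes. The bandwidth column is just the op rate times the 4096-byte transfer size; a one-line sanity check in bash, using only the figures from the table above:

    # fill row above: 528832 transfers/s at 4096 bytes each
    echo $(( 528832 * 4096 / 1024 / 1024 ))   # prints 2065, matching the reported 2065 MiB/s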
00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.383 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.383 00:18:07 -- accel/accel.sh@21 -- # val=software 00:05:51.383 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.383 00:18:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val=64 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val=64 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val=1 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val=Yes 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:51.642 00:18:07 -- accel/accel.sh@21 -- # val= 00:05:51.642 00:18:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # IFS=: 00:05:51.642 00:18:07 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@21 -- # val= 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@21 -- # val= 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@21 -- # val= 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@21 -- # val= 00:05:52.579 ************************************ 00:05:52.579 END TEST accel_fill 00:05:52.579 ************************************ 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- 
accel/accel.sh@21 -- # val= 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@21 -- # val= 00:05:52.579 00:18:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # IFS=: 00:05:52.579 00:18:08 -- accel/accel.sh@20 -- # read -r var val 00:05:52.579 00:18:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.579 00:18:08 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:52.579 00:18:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.579 00:05:52.579 real 0m2.733s 00:05:52.579 user 0m2.390s 00:05:52.579 sys 0m0.141s 00:05:52.579 00:18:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.579 00:18:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.579 00:18:08 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:52.579 00:18:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:52.579 00:18:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.579 00:18:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.579 ************************************ 00:05:52.579 START TEST accel_copy_crc32c 00:05:52.579 ************************************ 00:05:52.579 00:18:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:52.579 00:18:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.579 00:18:08 -- accel/accel.sh@17 -- # local accel_module 00:05:52.579 00:18:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:52.579 00:18:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:52.579 00:18:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.579 00:18:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.579 00:18:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.579 00:18:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.579 00:18:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.579 00:18:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.579 00:18:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.579 00:18:08 -- accel/accel.sh@42 -- # jq -r . 00:05:52.838 [2024-09-29 00:18:08.431604] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:52.838 [2024-09-29 00:18:08.431693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56286 ] 00:05:52.838 [2024-09-29 00:18:08.566456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.838 [2024-09-29 00:18:08.615055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.213 00:18:09 -- accel/accel.sh@18 -- # out=' 00:05:54.213 SPDK Configuration: 00:05:54.213 Core mask: 0x1 00:05:54.213 00:05:54.213 Accel Perf Configuration: 00:05:54.213 Workload Type: copy_crc32c 00:05:54.213 CRC-32C seed: 0 00:05:54.213 Vector size: 4096 bytes 00:05:54.213 Transfer size: 4096 bytes 00:05:54.213 Vector count 1 00:05:54.213 Module: software 00:05:54.213 Queue depth: 32 00:05:54.213 Allocate depth: 32 00:05:54.213 # threads/core: 1 00:05:54.213 Run time: 1 seconds 00:05:54.213 Verify: Yes 00:05:54.213 00:05:54.213 Running for 1 seconds... 
00:05:54.213 00:05:54.213 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.213 ------------------------------------------------------------------------------------ 00:05:54.213 0,0 285184/s 1114 MiB/s 0 0 00:05:54.213 ==================================================================================== 00:05:54.213 Total 285184/s 1114 MiB/s 0 0' 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:54.213 00:18:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:54.213 00:18:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.213 00:18:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.213 00:18:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.213 00:18:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.213 00:18:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.213 00:18:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.213 00:18:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.213 00:18:09 -- accel/accel.sh@42 -- # jq -r . 00:05:54.213 [2024-09-29 00:18:09.791845] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:54.213 [2024-09-29 00:18:09.791930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56300 ] 00:05:54.213 [2024-09-29 00:18:09.918454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.213 [2024-09-29 00:18:09.966641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val= 00:05:54.213 00:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val= 00:05:54.213 00:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val=0x1 00:05:54.213 00:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val= 00:05:54.213 00:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val= 00:05:54.213 00:18:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # IFS=: 00:05:54.213 00:18:09 -- accel/accel.sh@20 -- # read -r var val 00:05:54.213 00:18:09 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:54.213 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.213 00:18:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=0 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 
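The long runs of "IFS=:", "read -r var val" and "case \"$var\" in" trace lines here are accel.sh re-reading the "SPDK Configuration" banner of the second accel_perf pass and capturing the Workload Type and Module values into accel_opc and accel_module, which the later [[ -n copy_crc32c ]] / [[ software == software ]] checks assert. A minimal, self-contained bash rendering of that loop (variable names follow the trace tags; the real code in accel.sh reads the live accel_perf output rather than a here-doc, and its matching logic may differ in detail):

    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;    # e.g. "copy_crc32c"
            *Module*)          accel_module=${val# } ;; # e.g. "software"
        esac
    done <<'EOF'
    Workload Type: copy_crc32c
    Module: software
    EOF
    [[ -n $accel_opc && -n $accel_module && $accel_module == software ]] && echo "parsed: $accel_opc on $accel_module"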
00:18:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val= 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=software 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=32 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=32 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=1 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val=Yes 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val= 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:54.214 00:18:10 -- accel/accel.sh@21 -- # val= 00:05:54.214 00:18:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # IFS=: 00:05:54.214 00:18:10 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 
00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@21 -- # val= 00:05:55.588 00:18:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # IFS=: 00:05:55.588 00:18:11 -- accel/accel.sh@20 -- # read -r var val 00:05:55.588 00:18:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.588 00:18:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:55.588 00:18:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.588 00:05:55.588 real 0m2.715s 00:05:55.588 user 0m2.388s 00:05:55.588 sys 0m0.128s 00:05:55.588 00:18:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.589 ************************************ 00:05:55.589 END TEST accel_copy_crc32c 00:05:55.589 ************************************ 00:05:55.589 00:18:11 -- common/autotest_common.sh@10 -- # set +x 00:05:55.589 00:18:11 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.589 00:18:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:55.589 00:18:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.589 00:18:11 -- common/autotest_common.sh@10 -- # set +x 00:05:55.589 ************************************ 00:05:55.589 START TEST accel_copy_crc32c_C2 00:05:55.589 ************************************ 00:05:55.589 00:18:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.589 00:18:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.589 00:18:11 -- accel/accel.sh@17 -- # local accel_module 00:05:55.589 00:18:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:55.589 00:18:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:55.589 00:18:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.589 00:18:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.589 00:18:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.589 00:18:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.589 00:18:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.589 00:18:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.589 00:18:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.589 00:18:11 -- accel/accel.sh@42 -- # jq -r . 00:05:55.589 [2024-09-29 00:18:11.191975] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
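To poke at this copy_crc32c -C 2 case outside the run_test/accel_test harness, the perf binary and flags printed on the accel.sh@12 trace line above can be invoked directly. Dropping the "-c /dev/fd/62" argument is an assumption on my part: that descriptor carries the JSON config produced by build_accel_config, and without it accel_perf should fall back to the built-in software module, which is the module these runs exercise anyway.

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2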
00:05:55.589 [2024-09-29 00:18:11.192241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56335 ] 00:05:55.589 [2024-09-29 00:18:11.328926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.589 [2024-09-29 00:18:11.377770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.965 00:18:12 -- accel/accel.sh@18 -- # out=' 00:05:56.965 SPDK Configuration: 00:05:56.965 Core mask: 0x1 00:05:56.965 00:05:56.965 Accel Perf Configuration: 00:05:56.965 Workload Type: copy_crc32c 00:05:56.965 CRC-32C seed: 0 00:05:56.965 Vector size: 4096 bytes 00:05:56.965 Transfer size: 8192 bytes 00:05:56.965 Vector count 2 00:05:56.965 Module: software 00:05:56.965 Queue depth: 32 00:05:56.965 Allocate depth: 32 00:05:56.965 # threads/core: 1 00:05:56.965 Run time: 1 seconds 00:05:56.965 Verify: Yes 00:05:56.965 00:05:56.965 Running for 1 seconds... 00:05:56.965 00:05:56.965 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.965 ------------------------------------------------------------------------------------ 00:05:56.965 0,0 208256/s 1627 MiB/s 0 0 00:05:56.965 ==================================================================================== 00:05:56.965 Total 208256/s 813 MiB/s 0 0' 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:56.965 00:18:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:56.965 00:18:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.965 00:18:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.965 00:18:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.965 00:18:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.965 00:18:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.965 00:18:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.965 00:18:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.965 00:18:12 -- accel/accel.sh@42 -- # jq -r . 00:05:56.965 [2024-09-29 00:18:12.562184] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
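In the copy_crc32c -C 2 table above, the per-core row and the Total row disagree (1627 MiB/s vs 813 MiB/s) even though both report the same 208256 transfers/s. The numbers themselves explain it: the per-core figure is computed over the full 8192-byte transfer (two 4096-byte vectors), while the Total figure matches the single 4096-byte vector size. Whether that is a deliberate accounting choice or a reporting quirk of this accel_perf build is not something the log settles.

    # per-core row: full 8192-byte transfer (2 x 4096-byte vectors)
    echo $(( 208256 * 8192 / 1024 / 1024 ))   # prints 1627
    # Total row: matches the 4096-byte vector size instead
    echo $(( 208256 * 4096 / 1024 / 1024 ))   # prints 813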
00:05:56.965 [2024-09-29 00:18:12.562280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56354 ] 00:05:56.965 [2024-09-29 00:18:12.694441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.965 [2024-09-29 00:18:12.742978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=0x1 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=0 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=software 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=32 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=32 
00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=1 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val=Yes 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:56.965 00:18:12 -- accel/accel.sh@21 -- # val= 00:05:56.965 00:18:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # IFS=: 00:05:56.965 00:18:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@21 -- # val= 00:05:58.345 00:18:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # IFS=: 00:05:58.345 00:18:13 -- accel/accel.sh@20 -- # read -r var val 00:05:58.345 00:18:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.346 00:18:13 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:58.346 00:18:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.346 00:05:58.346 real 0m2.734s 00:05:58.346 user 0m2.390s 00:05:58.346 sys 0m0.144s 00:05:58.346 00:18:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.346 00:18:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.346 ************************************ 00:05:58.346 END TEST accel_copy_crc32c_C2 00:05:58.346 ************************************ 00:05:58.346 00:18:13 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:58.346 00:18:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
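run_test, visible here starting accel_dualcast, is the common/autotest_common.sh helper that wraps each accel_test invocation with the START TEST / END TEST banners and the real/user/sys timing lines seen at the end of every test in this section (roughly 2.7 s each: two 1-second perf passes plus app start-up). A much-simplified sketch of that behaviour; the real helper also manages xtrace and exit-status bookkeeping:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                   # source of the real/user/sys lines in this log
        echo "END TEST $name"
    }
    run_test accel_dualcast sleep 1   # stand-in command; the harness passes accel_test here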
00:05:58.346 00:18:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.346 00:18:13 -- common/autotest_common.sh@10 -- # set +x 00:05:58.346 ************************************ 00:05:58.346 START TEST accel_dualcast 00:05:58.346 ************************************ 00:05:58.346 00:18:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:58.346 00:18:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.346 00:18:13 -- accel/accel.sh@17 -- # local accel_module 00:05:58.346 00:18:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:58.346 00:18:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:58.346 00:18:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.346 00:18:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.346 00:18:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.346 00:18:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.346 00:18:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.346 00:18:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.346 00:18:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.346 00:18:13 -- accel/accel.sh@42 -- # jq -r . 00:05:58.346 [2024-09-29 00:18:13.972319] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:58.346 [2024-09-29 00:18:13.972428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56383 ] 00:05:58.346 [2024-09-29 00:18:14.108888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.346 [2024-09-29 00:18:14.157502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.726 00:18:15 -- accel/accel.sh@18 -- # out=' 00:05:59.726 SPDK Configuration: 00:05:59.726 Core mask: 0x1 00:05:59.726 00:05:59.726 Accel Perf Configuration: 00:05:59.726 Workload Type: dualcast 00:05:59.726 Transfer size: 4096 bytes 00:05:59.726 Vector count 1 00:05:59.726 Module: software 00:05:59.726 Queue depth: 32 00:05:59.726 Allocate depth: 32 00:05:59.726 # threads/core: 1 00:05:59.727 Run time: 1 seconds 00:05:59.727 Verify: Yes 00:05:59.727 00:05:59.727 Running for 1 seconds... 00:05:59.727 00:05:59.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.727 ------------------------------------------------------------------------------------ 00:05:59.727 0,0 401600/s 1568 MiB/s 0 0 00:05:59.727 ==================================================================================== 00:05:59.727 Total 401600/s 1568 MiB/s 0 0' 00:05:59.727 00:18:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:59.727 00:18:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.727 00:18:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.727 00:18:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.727 00:18:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.727 00:18:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.727 00:18:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.727 00:18:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.727 00:18:15 -- accel/accel.sh@42 -- # jq -r . 
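The dualcast table above reports 401600 ops/s as 1568 MiB/s, i.e. one 4096-byte vector per op. Reading dualcast as one source copied into two destination buffers (my description; the log itself only shows the counters), the bytes written per op would be twice that, so the table evidently counts each op once at the vector size:

    # as reported: one 4096-byte vector per dualcast op
    echo $(( 401600 * 4096 / 1024 / 1024 ))       # prints 1568
    # counting both destination buffers would double it
    echo $(( 401600 * 2 * 4096 / 1024 / 1024 ))   # prints 3137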
00:05:59.727 [2024-09-29 00:18:15.333287] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:05:59.727 [2024-09-29 00:18:15.333418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56403 ] 00:05:59.727 [2024-09-29 00:18:15.459397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.727 [2024-09-29 00:18:15.510176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=0x1 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=dualcast 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=software 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=32 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=32 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=1 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 
00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val=Yes 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:05:59.727 00:18:15 -- accel/accel.sh@21 -- # val= 00:05:59.727 00:18:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # IFS=: 00:05:59.727 00:18:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@21 -- # val= 00:06:01.107 00:18:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # IFS=: 00:06:01.107 00:18:16 -- accel/accel.sh@20 -- # read -r var val 00:06:01.107 00:18:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.107 00:18:16 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:01.107 00:18:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.107 00:06:01.107 real 0m2.725s 00:06:01.107 user 0m2.387s 00:06:01.107 sys 0m0.138s 00:06:01.107 00:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.107 00:18:16 -- common/autotest_common.sh@10 -- # set +x 00:06:01.107 ************************************ 00:06:01.107 END TEST accel_dualcast 00:06:01.107 ************************************ 00:06:01.107 00:18:16 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:01.107 00:18:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:01.107 00:18:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.107 00:18:16 -- common/autotest_common.sh@10 -- # set +x 00:06:01.107 ************************************ 00:06:01.107 START TEST accel_compare 00:06:01.107 ************************************ 00:06:01.107 00:18:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:01.107 
00:18:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.107 00:18:16 -- accel/accel.sh@17 -- # local accel_module 00:06:01.107 00:18:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:01.107 00:18:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:01.107 00:18:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.107 00:18:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.107 00:18:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.107 00:18:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.107 00:18:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.107 00:18:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.107 00:18:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.107 00:18:16 -- accel/accel.sh@42 -- # jq -r . 00:06:01.107 [2024-09-29 00:18:16.750876] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:01.107 [2024-09-29 00:18:16.750974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56437 ] 00:06:01.107 [2024-09-29 00:18:16.887827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.107 [2024-09-29 00:18:16.936656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.487 00:18:18 -- accel/accel.sh@18 -- # out=' 00:06:02.487 SPDK Configuration: 00:06:02.487 Core mask: 0x1 00:06:02.487 00:06:02.487 Accel Perf Configuration: 00:06:02.487 Workload Type: compare 00:06:02.487 Transfer size: 4096 bytes 00:06:02.487 Vector count 1 00:06:02.487 Module: software 00:06:02.487 Queue depth: 32 00:06:02.487 Allocate depth: 32 00:06:02.487 # threads/core: 1 00:06:02.487 Run time: 1 seconds 00:06:02.487 Verify: Yes 00:06:02.488 00:06:02.488 Running for 1 seconds... 00:06:02.488 00:06:02.488 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.488 ------------------------------------------------------------------------------------ 00:06:02.488 0,0 531744/s 2077 MiB/s 0 0 00:06:02.488 ==================================================================================== 00:06:02.488 Total 531744/s 2077 MiB/s 0 0' 00:06:02.488 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.488 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.488 00:18:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:02.488 00:18:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.488 00:18:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:02.488 00:18:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.488 00:18:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.488 00:18:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.488 00:18:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.488 00:18:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.488 00:18:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.488 00:18:18 -- accel/accel.sh@42 -- # jq -r . 00:06:02.488 [2024-09-29 00:18:18.113824] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
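Every per-core and Total row in these tables ends with the Failed and Miscompares counters, and with -y (Verify: Yes) they stay 0 throughout this section. A quick way to confirm that across a saved copy of the console output (the console.log path is an assumption; the field positions match the rows as printed above):

    # print any result row whose Failed or Miscompares column is non-zero
    awk '/MiB\/s/ && ($(NF-1) != 0 || $NF != 0)' console.log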
00:06:02.488 [2024-09-29 00:18:18.113929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56451 ] 00:06:02.488 [2024-09-29 00:18:18.248392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.488 [2024-09-29 00:18:18.300469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=0x1 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=compare 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=software 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=32 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=32 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=1 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val=Yes 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:02.748 00:18:18 -- accel/accel.sh@21 -- # val= 00:06:02.748 00:18:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # IFS=: 00:06:02.748 00:18:18 -- accel/accel.sh@20 -- # read -r var val 00:06:03.685 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.685 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.685 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.685 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.685 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.685 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.685 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.686 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.686 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.686 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.686 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.686 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.686 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.686 ************************************ 00:06:03.686 END TEST accel_compare 00:06:03.686 ************************************ 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.686 00:18:19 -- accel/accel.sh@21 -- # val= 00:06:03.686 00:18:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # IFS=: 00:06:03.686 00:18:19 -- accel/accel.sh@20 -- # read -r var val 00:06:03.686 00:18:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.686 00:18:19 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:03.686 00:18:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.686 00:06:03.686 real 0m2.735s 00:06:03.686 user 0m2.396s 00:06:03.686 sys 0m0.136s 00:06:03.686 00:18:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.686 00:18:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.686 00:18:19 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:03.686 00:18:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:03.686 00:18:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.686 00:18:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.686 ************************************ 00:06:03.686 START TEST accel_xor 00:06:03.686 ************************************ 00:06:03.686 00:18:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:03.686 00:18:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.686 00:18:19 -- accel/accel.sh@17 -- # local accel_module 00:06:03.686 
00:18:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:03.686 00:18:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:03.686 00:18:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.686 00:18:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.686 00:18:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.686 00:18:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.686 00:18:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.686 00:18:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.686 00:18:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.686 00:18:19 -- accel/accel.sh@42 -- # jq -r . 00:06:03.686 [2024-09-29 00:18:19.530546] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:03.686 [2024-09-29 00:18:19.530610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56486 ] 00:06:03.946 [2024-09-29 00:18:19.661386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.946 [2024-09-29 00:18:19.710464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.325 00:18:20 -- accel/accel.sh@18 -- # out=' 00:06:05.325 SPDK Configuration: 00:06:05.325 Core mask: 0x1 00:06:05.325 00:06:05.325 Accel Perf Configuration: 00:06:05.325 Workload Type: xor 00:06:05.325 Source buffers: 2 00:06:05.325 Transfer size: 4096 bytes 00:06:05.325 Vector count 1 00:06:05.325 Module: software 00:06:05.325 Queue depth: 32 00:06:05.325 Allocate depth: 32 00:06:05.325 # threads/core: 1 00:06:05.325 Run time: 1 seconds 00:06:05.325 Verify: Yes 00:06:05.325 00:06:05.325 Running for 1 seconds... 00:06:05.325 00:06:05.325 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:05.325 ------------------------------------------------------------------------------------ 00:06:05.325 0,0 273120/s 1066 MiB/s 0 0 00:06:05.325 ==================================================================================== 00:06:05.325 Total 273120/s 1066 MiB/s 0 0' 00:06:05.325 00:18:20 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:20 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:05.325 00:18:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:05.325 00:18:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.325 00:18:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.325 00:18:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.325 00:18:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.325 00:18:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.325 00:18:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.325 00:18:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.325 00:18:20 -- accel/accel.sh@42 -- # jq -r . 00:06:05.325 [2024-09-29 00:18:20.891913] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:05.325 [2024-09-29 00:18:20.892003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56505 ] 00:06:05.325 [2024-09-29 00:18:21.026392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.325 [2024-09-29 00:18:21.074121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=0x1 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=xor 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=2 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=software 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=32 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=32 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=1 00:06:05.325 00:18:21 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val=Yes 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:05.325 00:18:21 -- accel/accel.sh@21 -- # val= 00:06:05.325 00:18:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # IFS=: 00:06:05.325 00:18:21 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@21 -- # val= 00:06:06.702 ************************************ 00:06:06.702 END TEST accel_xor 00:06:06.702 ************************************ 00:06:06.702 00:18:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # IFS=: 00:06:06.702 00:18:22 -- accel/accel.sh@20 -- # read -r var val 00:06:06.702 00:18:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.702 00:18:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:06.702 00:18:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.702 00:06:06.702 real 0m2.717s 00:06:06.702 user 0m2.390s 00:06:06.702 sys 0m0.127s 00:06:06.702 00:18:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.702 00:18:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.702 00:18:22 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:06.702 00:18:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:06.702 00:18:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.702 00:18:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.703 ************************************ 00:06:06.703 START TEST accel_xor 00:06:06.703 ************************************ 00:06:06.703 
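The xor pass above and the three-source-buffer variant that follows drive the same accel_perf example binary; the test wrapper only changes the -x (source buffer count) argument. A minimal sketch of the equivalent standalone invocations, assuming the same build tree as this run and assuming the wrapper-generated "-c /dev/fd/62" accel config can simply be dropped for a plain software-module run (an assumption, not something this log states):

  # 2-source-buffer xor, 4 KiB transfers, 1 second run, with verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  # 3-source-buffer variant, as exercised by the section below
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

As a sanity check on the figures reported above: 273120 transfers/s at 4096 bytes each is about 1.12 GB/s, i.e. roughly 1066 MiB/s, matching the per-core bandwidth line of that run.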
00:18:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:06.703 00:18:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.703 00:18:22 -- accel/accel.sh@17 -- # local accel_module 00:06:06.703 00:18:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:06.703 00:18:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:06.703 00:18:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.703 00:18:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.703 00:18:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.703 00:18:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.703 00:18:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.703 00:18:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.703 00:18:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.703 00:18:22 -- accel/accel.sh@42 -- # jq -r . 00:06:06.703 [2024-09-29 00:18:22.299525] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:06.703 [2024-09-29 00:18:22.299612] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56534 ] 00:06:06.703 [2024-09-29 00:18:22.436807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.703 [2024-09-29 00:18:22.485842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.079 00:18:23 -- accel/accel.sh@18 -- # out=' 00:06:08.079 SPDK Configuration: 00:06:08.079 Core mask: 0x1 00:06:08.079 00:06:08.079 Accel Perf Configuration: 00:06:08.079 Workload Type: xor 00:06:08.079 Source buffers: 3 00:06:08.079 Transfer size: 4096 bytes 00:06:08.079 Vector count 1 00:06:08.079 Module: software 00:06:08.079 Queue depth: 32 00:06:08.079 Allocate depth: 32 00:06:08.079 # threads/core: 1 00:06:08.079 Run time: 1 seconds 00:06:08.079 Verify: Yes 00:06:08.079 00:06:08.079 Running for 1 seconds... 00:06:08.079 00:06:08.079 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.079 ------------------------------------------------------------------------------------ 00:06:08.079 0,0 258912/s 1011 MiB/s 0 0 00:06:08.079 ==================================================================================== 00:06:08.079 Total 258912/s 1011 MiB/s 0 0' 00:06:08.079 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.079 00:18:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:08.079 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.079 00:18:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.079 00:18:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:08.079 00:18:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.079 00:18:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.079 00:18:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.079 00:18:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.079 00:18:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.079 00:18:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.079 00:18:23 -- accel/accel.sh@42 -- # jq -r . 00:06:08.079 [2024-09-29 00:18:23.662647] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:08.079 [2024-09-29 00:18:23.662737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56554 ] 00:06:08.079 [2024-09-29 00:18:23.789888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.079 [2024-09-29 00:18:23.837678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.079 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.079 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.079 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=0x1 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=xor 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=3 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=software 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=32 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=32 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=1 00:06:08.080 00:18:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val=Yes 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:08.080 00:18:23 -- accel/accel.sh@21 -- # val= 00:06:08.080 00:18:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # IFS=: 00:06:08.080 00:18:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@21 -- # val= 00:06:09.460 00:18:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # IFS=: 00:06:09.460 00:18:24 -- accel/accel.sh@20 -- # read -r var val 00:06:09.460 00:18:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.460 00:18:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:09.460 00:18:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.460 00:06:09.460 real 0m2.718s 00:06:09.460 user 0m2.377s 00:06:09.460 sys 0m0.139s 00:06:09.460 00:18:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.460 00:18:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.460 ************************************ 00:06:09.460 END TEST accel_xor 00:06:09.460 ************************************ 00:06:09.460 00:18:25 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:09.460 00:18:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:09.460 00:18:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.460 00:18:25 -- common/autotest_common.sh@10 -- # set +x 00:06:09.460 ************************************ 00:06:09.460 START TEST accel_dif_verify 00:06:09.460 ************************************ 
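The dif_verify section that follows switches only the workload type; per the configuration echoed below it verifies DIF (T10 protection information) over 4096-byte transfers with a 512-byte block size and 8 bytes of metadata per block. A minimal sketch of the equivalent standalone command, under the same assumptions as the xor sketch above (CI build-tree path, -c omitted for a plain software run):

  # DIF verify, 4 KiB transfers, 1 second run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
  # the dif_generate and dif_generate_copy sections later in this log differ only in -w

For the throughput reported below, 112032 transfers/s at 4096 bytes each works out to roughly 437 MiB/s, consistent with the Total line of that run.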
00:06:09.460 00:18:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:09.460 00:18:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.460 00:18:25 -- accel/accel.sh@17 -- # local accel_module 00:06:09.460 00:18:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:09.460 00:18:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:09.460 00:18:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.460 00:18:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.460 00:18:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.461 00:18:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.461 00:18:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.461 00:18:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.461 00:18:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.461 00:18:25 -- accel/accel.sh@42 -- # jq -r . 00:06:09.461 [2024-09-29 00:18:25.059660] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:09.461 [2024-09-29 00:18:25.060390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56587 ] 00:06:09.461 [2024-09-29 00:18:25.196913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.461 [2024-09-29 00:18:25.244943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.876 00:18:26 -- accel/accel.sh@18 -- # out=' 00:06:10.876 SPDK Configuration: 00:06:10.876 Core mask: 0x1 00:06:10.876 00:06:10.876 Accel Perf Configuration: 00:06:10.876 Workload Type: dif_verify 00:06:10.876 Vector size: 4096 bytes 00:06:10.876 Transfer size: 4096 bytes 00:06:10.876 Block size: 512 bytes 00:06:10.876 Metadata size: 8 bytes 00:06:10.876 Vector count 1 00:06:10.876 Module: software 00:06:10.876 Queue depth: 32 00:06:10.876 Allocate depth: 32 00:06:10.876 # threads/core: 1 00:06:10.876 Run time: 1 seconds 00:06:10.876 Verify: No 00:06:10.876 00:06:10.876 Running for 1 seconds... 00:06:10.876 00:06:10.876 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.876 ------------------------------------------------------------------------------------ 00:06:10.876 0,0 112032/s 444 MiB/s 0 0 00:06:10.876 ==================================================================================== 00:06:10.876 Total 112032/s 437 MiB/s 0 0' 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:10.876 00:18:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.876 00:18:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.876 00:18:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.876 00:18:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.876 00:18:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.876 00:18:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.876 00:18:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.876 00:18:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.876 00:18:26 -- accel/accel.sh@42 -- # jq -r . 00:06:10.876 [2024-09-29 00:18:26.438839] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:10.876 [2024-09-29 00:18:26.438946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56602 ] 00:06:10.876 [2024-09-29 00:18:26.577440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.876 [2024-09-29 00:18:26.629288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=0x1 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=dif_verify 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=software 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 
-- # val=32 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=32 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=1 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val=No 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:10.876 00:18:26 -- accel/accel.sh@21 -- # val= 00:06:10.876 00:18:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # IFS=: 00:06:10.876 00:18:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.287 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.287 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.287 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.287 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.287 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.287 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.287 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.287 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.287 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.287 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.287 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.288 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.288 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.288 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.288 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.288 00:18:27 -- accel/accel.sh@21 -- # val= 00:06:12.288 00:18:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # IFS=: 00:06:12.288 00:18:27 -- accel/accel.sh@20 -- # read -r var val 00:06:12.288 00:18:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.288 00:18:27 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:12.288 00:18:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.288 00:06:12.288 real 0m2.764s 00:06:12.288 user 0m2.423s 00:06:12.288 sys 0m0.141s 00:06:12.288 00:18:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.288 00:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.288 ************************************ 00:06:12.288 END TEST 
accel_dif_verify 00:06:12.288 ************************************ 00:06:12.288 00:18:27 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:12.288 00:18:27 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:12.288 00:18:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.288 00:18:27 -- common/autotest_common.sh@10 -- # set +x 00:06:12.288 ************************************ 00:06:12.288 START TEST accel_dif_generate 00:06:12.288 ************************************ 00:06:12.288 00:18:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:12.288 00:18:27 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.288 00:18:27 -- accel/accel.sh@17 -- # local accel_module 00:06:12.288 00:18:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:12.288 00:18:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:12.288 00:18:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.288 00:18:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.288 00:18:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.288 00:18:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.288 00:18:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.288 00:18:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.288 00:18:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.288 00:18:27 -- accel/accel.sh@42 -- # jq -r . 00:06:12.288 [2024-09-29 00:18:27.887550] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:12.288 [2024-09-29 00:18:27.888191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56637 ] 00:06:12.288 [2024-09-29 00:18:28.025467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.288 [2024-09-29 00:18:28.078879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.665 00:18:29 -- accel/accel.sh@18 -- # out=' 00:06:13.665 SPDK Configuration: 00:06:13.665 Core mask: 0x1 00:06:13.665 00:06:13.665 Accel Perf Configuration: 00:06:13.665 Workload Type: dif_generate 00:06:13.665 Vector size: 4096 bytes 00:06:13.665 Transfer size: 4096 bytes 00:06:13.665 Block size: 512 bytes 00:06:13.665 Metadata size: 8 bytes 00:06:13.665 Vector count 1 00:06:13.665 Module: software 00:06:13.665 Queue depth: 32 00:06:13.665 Allocate depth: 32 00:06:13.665 # threads/core: 1 00:06:13.665 Run time: 1 seconds 00:06:13.665 Verify: No 00:06:13.665 00:06:13.665 Running for 1 seconds... 
00:06:13.665 00:06:13.665 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.665 ------------------------------------------------------------------------------------ 00:06:13.665 0,0 140960/s 559 MiB/s 0 0 00:06:13.665 ==================================================================================== 00:06:13.665 Total 140960/s 550 MiB/s 0 0' 00:06:13.665 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.665 00:18:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:13.665 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.665 00:18:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:13.665 00:18:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.665 00:18:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.665 00:18:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.665 00:18:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.665 00:18:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.665 00:18:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.665 00:18:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.665 00:18:29 -- accel/accel.sh@42 -- # jq -r . 00:06:13.665 [2024-09-29 00:18:29.266831] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:13.665 [2024-09-29 00:18:29.266917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56655 ] 00:06:13.665 [2024-09-29 00:18:29.399038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.665 [2024-09-29 00:18:29.447473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.665 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.665 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.665 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.665 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=0x1 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=dif_generate 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 
00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=software 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=32 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=32 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=1 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val=No 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:13.666 00:18:29 -- accel/accel.sh@21 -- # val= 00:06:13.666 00:18:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # IFS=: 00:06:13.666 00:18:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- 
accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@21 -- # val= 00:06:15.121 00:18:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # IFS=: 00:06:15.121 00:18:30 -- accel/accel.sh@20 -- # read -r var val 00:06:15.121 00:18:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.121 00:18:30 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:15.121 00:18:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.121 00:06:15.121 real 0m2.766s 00:06:15.121 user 0m2.415s 00:06:15.121 sys 0m0.150s 00:06:15.121 00:18:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.121 00:18:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.121 ************************************ 00:06:15.121 END TEST accel_dif_generate 00:06:15.121 ************************************ 00:06:15.121 00:18:30 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:15.121 00:18:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:15.121 00:18:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.121 00:18:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.121 ************************************ 00:06:15.121 START TEST accel_dif_generate_copy 00:06:15.121 ************************************ 00:06:15.121 00:18:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:15.121 00:18:30 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.121 00:18:30 -- accel/accel.sh@17 -- # local accel_module 00:06:15.121 00:18:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:15.121 00:18:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:15.121 00:18:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.121 00:18:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.121 00:18:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.121 00:18:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.121 00:18:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.121 00:18:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.121 00:18:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.121 00:18:30 -- accel/accel.sh@42 -- # jq -r . 00:06:15.121 [2024-09-29 00:18:30.707748] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:15.121 [2024-09-29 00:18:30.707865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56692 ] 00:06:15.121 [2024-09-29 00:18:30.835458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.121 [2024-09-29 00:18:30.886658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.498 00:18:32 -- accel/accel.sh@18 -- # out=' 00:06:16.498 SPDK Configuration: 00:06:16.498 Core mask: 0x1 00:06:16.498 00:06:16.498 Accel Perf Configuration: 00:06:16.498 Workload Type: dif_generate_copy 00:06:16.498 Vector size: 4096 bytes 00:06:16.498 Transfer size: 4096 bytes 00:06:16.498 Vector count 1 00:06:16.498 Module: software 00:06:16.498 Queue depth: 32 00:06:16.498 Allocate depth: 32 00:06:16.498 # threads/core: 1 00:06:16.498 Run time: 1 seconds 00:06:16.498 Verify: No 00:06:16.498 00:06:16.498 Running for 1 seconds... 00:06:16.498 00:06:16.498 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.498 ------------------------------------------------------------------------------------ 00:06:16.498 0,0 109120/s 432 MiB/s 0 0 00:06:16.498 ==================================================================================== 00:06:16.498 Total 109120/s 426 MiB/s 0 0' 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:16.498 00:18:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:16.498 00:18:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.498 00:18:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.498 00:18:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.498 00:18:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.498 00:18:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.498 00:18:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.498 00:18:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.498 00:18:32 -- accel/accel.sh@42 -- # jq -r . 00:06:16.498 [2024-09-29 00:18:32.079250] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:16.498 [2024-09-29 00:18:32.079423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56706 ] 00:06:16.498 [2024-09-29 00:18:32.214416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.498 [2024-09-29 00:18:32.267696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val=0x1 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.498 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.498 00:18:32 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:16.498 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.498 00:18:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val=software 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val=32 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val=32 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 
-- # val=1 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val=No 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:16.499 00:18:32 -- accel/accel.sh@21 -- # val= 00:06:16.499 00:18:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # IFS=: 00:06:16.499 00:18:32 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@21 -- # val= 00:06:17.878 00:18:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # IFS=: 00:06:17.878 00:18:33 -- accel/accel.sh@20 -- # read -r var val 00:06:17.878 00:18:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.878 00:18:33 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:17.878 00:18:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.878 00:06:17.878 real 0m2.749s 00:06:17.878 user 0m2.401s 00:06:17.878 sys 0m0.141s 00:06:17.878 00:18:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.878 ************************************ 00:06:17.878 END TEST accel_dif_generate_copy 00:06:17.878 ************************************ 00:06:17.878 00:18:33 -- common/autotest_common.sh@10 -- # set +x 00:06:17.878 00:18:33 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:17.878 00:18:33 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.878 00:18:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:17.878 00:18:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.878 00:18:33 -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.878 ************************************ 00:06:17.878 START TEST accel_comp 00:06:17.878 ************************************ 00:06:17.878 00:18:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.878 00:18:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.878 00:18:33 -- accel/accel.sh@17 -- # local accel_module 00:06:17.878 00:18:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.878 00:18:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:17.878 00:18:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.878 00:18:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.878 00:18:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.878 00:18:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.878 00:18:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.878 00:18:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.878 00:18:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.878 00:18:33 -- accel/accel.sh@42 -- # jq -r . 00:06:17.878 [2024-09-29 00:18:33.505700] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:17.878 [2024-09-29 00:18:33.505834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56740 ] 00:06:17.878 [2024-09-29 00:18:33.638194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.878 [2024-09-29 00:18:33.690039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.255 00:18:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:19.255 00:06:19.255 SPDK Configuration: 00:06:19.255 Core mask: 0x1 00:06:19.255 00:06:19.255 Accel Perf Configuration: 00:06:19.255 Workload Type: compress 00:06:19.255 Transfer size: 4096 bytes 00:06:19.255 Vector count 1 00:06:19.255 Module: software 00:06:19.255 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.255 Queue depth: 32 00:06:19.255 Allocate depth: 32 00:06:19.255 # threads/core: 1 00:06:19.255 Run time: 1 seconds 00:06:19.255 Verify: No 00:06:19.255 00:06:19.255 Running for 1 seconds... 
00:06:19.255 00:06:19.255 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.255 ------------------------------------------------------------------------------------ 00:06:19.255 0,0 53248/s 221 MiB/s 0 0 00:06:19.255 ==================================================================================== 00:06:19.255 Total 53248/s 208 MiB/s 0 0' 00:06:19.255 00:18:34 -- accel/accel.sh@20 -- # IFS=: 00:06:19.255 00:18:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.255 00:18:34 -- accel/accel.sh@20 -- # read -r var val 00:06:19.255 00:18:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.255 00:18:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.255 00:18:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.255 00:18:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.255 00:18:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.255 00:18:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.255 00:18:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.255 00:18:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.255 00:18:34 -- accel/accel.sh@42 -- # jq -r . 00:06:19.255 [2024-09-29 00:18:34.888816] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:19.255 [2024-09-29 00:18:34.889353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56760 ] 00:06:19.255 [2024-09-29 00:18:35.023498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.255 [2024-09-29 00:18:35.078157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.514 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=0x1 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=compress 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 
00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=software 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=32 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=32 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=1 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val=No 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:19.515 00:18:35 -- accel/accel.sh@21 -- # val= 00:06:19.515 00:18:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # IFS=: 00:06:19.515 00:18:35 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 
00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@21 -- # val= 00:06:20.452 00:18:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # IFS=: 00:06:20.452 00:18:36 -- accel/accel.sh@20 -- # read -r var val 00:06:20.452 00:18:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.452 00:18:36 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:20.452 00:18:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.452 00:06:20.452 real 0m2.777s 00:06:20.452 user 0m2.427s 00:06:20.452 sys 0m0.147s 00:06:20.452 00:18:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.452 00:18:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.452 ************************************ 00:06:20.452 END TEST accel_comp 00:06:20.452 ************************************ 00:06:20.711 00:18:36 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.711 00:18:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:20.711 00:18:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.711 00:18:36 -- common/autotest_common.sh@10 -- # set +x 00:06:20.711 ************************************ 00:06:20.711 START TEST accel_decomp 00:06:20.711 ************************************ 00:06:20.711 00:18:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.711 00:18:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.711 00:18:36 -- accel/accel.sh@17 -- # local accel_module 00:06:20.711 00:18:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.711 00:18:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:20.711 00:18:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.711 00:18:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.711 00:18:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.711 00:18:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.711 00:18:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.711 00:18:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.711 00:18:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.711 00:18:36 -- accel/accel.sh@42 -- # jq -r . 00:06:20.711 [2024-09-29 00:18:36.335740] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:20.711 [2024-09-29 00:18:36.335840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56789 ] 00:06:20.711 [2024-09-29 00:18:36.469756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.711 [2024-09-29 00:18:36.524210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.088 00:18:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:22.088 00:06:22.088 SPDK Configuration: 00:06:22.088 Core mask: 0x1 00:06:22.088 00:06:22.088 Accel Perf Configuration: 00:06:22.088 Workload Type: decompress 00:06:22.088 Transfer size: 4096 bytes 00:06:22.088 Vector count 1 00:06:22.088 Module: software 00:06:22.088 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.088 Queue depth: 32 00:06:22.088 Allocate depth: 32 00:06:22.088 # threads/core: 1 00:06:22.088 Run time: 1 seconds 00:06:22.088 Verify: Yes 00:06:22.088 00:06:22.088 Running for 1 seconds... 00:06:22.088 00:06:22.088 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.088 ------------------------------------------------------------------------------------ 00:06:22.088 0,0 74560/s 137 MiB/s 0 0 00:06:22.088 ==================================================================================== 00:06:22.088 Total 74560/s 291 MiB/s 0 0' 00:06:22.088 00:18:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:22.088 00:18:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.088 00:18:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.088 00:18:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.088 00:18:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.088 00:18:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.088 00:18:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.088 00:18:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.088 00:18:37 -- accel/accel.sh@42 -- # jq -r . 00:06:22.088 [2024-09-29 00:18:37.700584] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
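A quick sanity check on the single-core decompress table above (assuming 1 MiB = 1048576 bytes): 74560 transfers/s × 4096 bytes ≈ 291 MiB/s, which matches the Total row.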
00:06:22.088 [2024-09-29 00:18:37.700707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56808 ] 00:06:22.088 [2024-09-29 00:18:37.833241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.088 [2024-09-29 00:18:37.881977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val=0x1 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.088 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.088 00:18:37 -- accel/accel.sh@21 -- # val=decompress 00:06:22.088 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val=software 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val=32 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- 
accel/accel.sh@21 -- # val=32 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val=1 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val=Yes 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.089 00:18:37 -- accel/accel.sh@21 -- # val= 00:06:22.089 00:18:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.089 00:18:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@21 -- # val= 00:06:23.497 00:18:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # IFS=: 00:06:23.497 00:18:39 -- accel/accel.sh@20 -- # read -r var val 00:06:23.497 00:18:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.497 00:18:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:23.497 00:18:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.497 00:06:23.497 real 0m2.749s 00:06:23.497 user 0m2.420s 00:06:23.497 sys 0m0.129s 00:06:23.497 00:18:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.497 00:18:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.498 ************************************ 00:06:23.498 END TEST accel_decomp 00:06:23.498 ************************************ 00:06:23.498 00:18:39 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
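The accel_decomp block above echoes the exact accel_perf command line the harness runs (accel/accel.sh@12). A minimal sketch of reproducing that run by hand, assuming the same built tree under /home/vagrant/spdk_repo/spdk; the flag meanings are inferred from the configuration each run prints, and -c /dev/fd/62 (the JSON accel config assembled by build_accel_config) is dropped here on the assumption that the defaults suffice for the software module:

  # software decompress benchmark over the bundled bib test file
  SPDK=/home/vagrant/spdk_repo/spdk
  # -t 1: run for 1 second; -w decompress: workload type;
  # -l <file>: compressed input file; -y: verify the decompressed output
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y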
00:06:23.498 00:18:39 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:23.498 00:18:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.498 00:18:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.498 ************************************ 00:06:23.498 START TEST accel_decmop_full 00:06:23.498 ************************************ 00:06:23.498 00:18:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.498 00:18:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.498 00:18:39 -- accel/accel.sh@17 -- # local accel_module 00:06:23.498 00:18:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.498 00:18:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:23.498 00:18:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.498 00:18:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.498 00:18:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.498 00:18:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.498 00:18:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.498 00:18:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.498 00:18:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.498 00:18:39 -- accel/accel.sh@42 -- # jq -r . 00:06:23.498 [2024-09-29 00:18:39.137787] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:23.498 [2024-09-29 00:18:39.137876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56843 ] 00:06:23.498 [2024-09-29 00:18:39.276673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.498 [2024-09-29 00:18:39.328664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.886 00:18:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:24.886 00:06:24.886 SPDK Configuration: 00:06:24.886 Core mask: 0x1 00:06:24.886 00:06:24.886 Accel Perf Configuration: 00:06:24.886 Workload Type: decompress 00:06:24.886 Transfer size: 111250 bytes 00:06:24.886 Vector count 1 00:06:24.886 Module: software 00:06:24.886 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.886 Queue depth: 32 00:06:24.886 Allocate depth: 32 00:06:24.886 # threads/core: 1 00:06:24.886 Run time: 1 seconds 00:06:24.886 Verify: Yes 00:06:24.886 00:06:24.886 Running for 1 seconds... 
00:06:24.886 00:06:24.886 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.886 ------------------------------------------------------------------------------------ 00:06:24.886 0,0 4704/s 194 MiB/s 0 0 00:06:24.886 ==================================================================================== 00:06:24.886 Total 4704/s 499 MiB/s 0 0' 00:06:24.886 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:24.886 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:24.886 00:18:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:24.886 00:18:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:24.886 00:18:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.886 00:18:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.886 00:18:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.886 00:18:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.886 00:18:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.886 00:18:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.886 00:18:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.886 00:18:40 -- accel/accel.sh@42 -- # jq -r . 00:06:24.886 [2024-09-29 00:18:40.534229] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:24.886 [2024-09-29 00:18:40.534322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56857 ] 00:06:24.886 [2024-09-29 00:18:40.672683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.886 [2024-09-29 00:18:40.727066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=0x1 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=decompress 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:25.146 00:18:40 -- accel/accel.sh@20 
-- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=software 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=32 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=32 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=1 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val=Yes 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.146 00:18:40 -- accel/accel.sh@21 -- # val= 00:06:25.146 00:18:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.146 00:18:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # 
val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@21 -- # val= 00:06:26.083 00:18:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # IFS=: 00:06:26.083 00:18:41 -- accel/accel.sh@20 -- # read -r var val 00:06:26.083 00:18:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.083 00:18:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.083 00:18:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.083 00:06:26.083 real 0m2.804s 00:06:26.083 user 0m2.446s 00:06:26.083 sys 0m0.153s 00:06:26.083 00:18:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.083 ************************************ 00:06:26.083 END TEST accel_decmop_full 00:06:26.083 ************************************ 00:06:26.083 00:18:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.342 00:18:41 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.342 00:18:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:26.342 00:18:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.342 00:18:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.342 ************************************ 00:06:26.342 START TEST accel_decomp_mcore 00:06:26.342 ************************************ 00:06:26.342 00:18:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.342 00:18:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.342 00:18:41 -- accel/accel.sh@17 -- # local accel_module 00:06:26.342 00:18:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.343 00:18:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:26.343 00:18:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.343 00:18:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.343 00:18:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.343 00:18:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.343 00:18:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.343 00:18:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.343 00:18:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.343 00:18:41 -- accel/accel.sh@42 -- # jq -r . 00:06:26.343 [2024-09-29 00:18:41.982983] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:26.343 [2024-09-29 00:18:41.983074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56897 ] 00:06:26.343 [2024-09-29 00:18:42.118708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.343 [2024-09-29 00:18:42.174492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.343 [2024-09-29 00:18:42.174632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.343 [2024-09-29 00:18:42.174747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.343 [2024-09-29 00:18:42.174919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.717 00:18:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:27.717 00:06:27.717 SPDK Configuration: 00:06:27.717 Core mask: 0xf 00:06:27.717 00:06:27.717 Accel Perf Configuration: 00:06:27.717 Workload Type: decompress 00:06:27.717 Transfer size: 4096 bytes 00:06:27.717 Vector count 1 00:06:27.717 Module: software 00:06:27.717 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.717 Queue depth: 32 00:06:27.717 Allocate depth: 32 00:06:27.717 # threads/core: 1 00:06:27.717 Run time: 1 seconds 00:06:27.717 Verify: Yes 00:06:27.717 00:06:27.717 Running for 1 seconds... 00:06:27.717 00:06:27.717 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.717 ------------------------------------------------------------------------------------ 00:06:27.717 0,0 62976/s 116 MiB/s 0 0 00:06:27.717 3,0 61056/s 112 MiB/s 0 0 00:06:27.717 2,0 59168/s 109 MiB/s 0 0 00:06:27.717 1,0 60800/s 112 MiB/s 0 0 00:06:27.717 ==================================================================================== 00:06:27.717 Total 244000/s 953 MiB/s 0 0' 00:06:27.717 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.717 00:18:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.717 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.717 00:18:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:27.717 00:18:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.717 00:18:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.717 00:18:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.717 00:18:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.717 00:18:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.717 00:18:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.717 00:18:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.717 00:18:43 -- accel/accel.sh@42 -- # jq -r . 00:06:27.717 [2024-09-29 00:18:43.362101] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
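For the -m 0xf run above, the per-core transfer counts add up to the Total row: 62976 + 61056 + 59168 + 60800 = 244000 transfers/s, and 244000 × 4096 bytes ≈ 953 MiB/s, matching the reported total bandwidth. The four result rows correspond to the four reactors started on cores 0-3 by the 0xf core mask.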
00:06:27.717 [2024-09-29 00:18:43.362664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56914 ] 00:06:27.717 [2024-09-29 00:18:43.498098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.717 [2024-09-29 00:18:43.550892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.717 [2024-09-29 00:18:43.550967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.717 [2024-09-29 00:18:43.551108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.717 [2024-09-29 00:18:43.551111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=0xf 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=decompress 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=software 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=32 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=32 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=1 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val=Yes 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:27.976 00:18:43 -- accel/accel.sh@21 -- # val= 00:06:27.976 00:18:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # IFS=: 00:06:27.976 00:18:43 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- 
accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@21 -- # val= 00:06:28.913 00:18:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # IFS=: 00:06:28.913 00:18:44 -- accel/accel.sh@20 -- # read -r var val 00:06:28.913 00:18:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.913 00:18:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:28.913 00:18:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.913 00:06:28.913 real 0m2.767s 00:06:28.913 user 0m8.840s 00:06:28.913 sys 0m0.174s 00:06:28.913 00:18:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.913 00:18:44 -- common/autotest_common.sh@10 -- # set +x 00:06:28.913 ************************************ 00:06:28.913 END TEST accel_decomp_mcore 00:06:28.913 ************************************ 00:06:29.172 00:18:44 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.173 00:18:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:29.173 00:18:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.173 00:18:44 -- common/autotest_common.sh@10 -- # set +x 00:06:29.173 ************************************ 00:06:29.173 START TEST accel_decomp_full_mcore 00:06:29.173 ************************************ 00:06:29.173 00:18:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.173 00:18:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.173 00:18:44 -- accel/accel.sh@17 -- # local accel_module 00:06:29.173 00:18:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.173 00:18:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.173 00:18:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.173 00:18:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.173 00:18:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.173 00:18:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.173 00:18:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.173 00:18:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.173 00:18:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.173 00:18:44 -- accel/accel.sh@42 -- # jq -r . 00:06:29.173 [2024-09-29 00:18:44.787293] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:29.173 [2024-09-29 00:18:44.787420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56946 ] 00:06:29.173 [2024-09-29 00:18:44.918262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.173 [2024-09-29 00:18:44.968529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.173 [2024-09-29 00:18:44.968695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.173 [2024-09-29 00:18:44.968828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.173 [2024-09-29 00:18:44.969041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.552 00:18:46 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:30.552 00:06:30.552 SPDK Configuration: 00:06:30.552 Core mask: 0xf 00:06:30.552 00:06:30.552 Accel Perf Configuration: 00:06:30.552 Workload Type: decompress 00:06:30.552 Transfer size: 111250 bytes 00:06:30.552 Vector count 1 00:06:30.552 Module: software 00:06:30.552 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.552 Queue depth: 32 00:06:30.552 Allocate depth: 32 00:06:30.552 # threads/core: 1 00:06:30.552 Run time: 1 seconds 00:06:30.552 Verify: Yes 00:06:30.552 00:06:30.552 Running for 1 seconds... 00:06:30.552 00:06:30.552 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.552 ------------------------------------------------------------------------------------ 00:06:30.552 0,0 4768/s 196 MiB/s 0 0 00:06:30.552 3,0 4768/s 196 MiB/s 0 0 00:06:30.552 2,0 4768/s 196 MiB/s 0 0 00:06:30.552 1,0 4800/s 198 MiB/s 0 0 00:06:30.552 ==================================================================================== 00:06:30.552 Total 19104/s 2026 MiB/s 0 0' 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:30.552 00:18:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.552 00:18:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.552 00:18:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.552 00:18:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.552 00:18:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.552 00:18:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.552 00:18:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.552 00:18:46 -- accel/accel.sh@42 -- # jq -r . 00:06:30.552 [2024-09-29 00:18:46.164219] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
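The same check holds for the full-buffer multicore table above: 4768 + 4768 + 4768 + 4800 = 19104 transfers/s, and 19104 × 111250 bytes ≈ 2026 MiB/s, matching the Total row. Note that the -o 0 runs report a transfer size of 111250 bytes rather than the 4096 bytes used by the default runs.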
00:06:30.552 [2024-09-29 00:18:46.164321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56974 ] 00:06:30.552 [2024-09-29 00:18:46.296179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.552 [2024-09-29 00:18:46.346777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.552 [2024-09-29 00:18:46.346967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.552 [2024-09-29 00:18:46.347064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.552 [2024-09-29 00:18:46.347067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=0xf 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=decompress 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=software 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 
00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=32 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=32 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=1 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val=Yes 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:30.552 00:18:46 -- accel/accel.sh@21 -- # val= 00:06:30.552 00:18:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # IFS=: 00:06:30.552 00:18:46 -- accel/accel.sh@20 -- # read -r var val 00:06:31.932 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.932 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.932 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.932 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.932 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- 
accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@21 -- # val= 00:06:31.933 00:18:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # IFS=: 00:06:31.933 00:18:47 -- accel/accel.sh@20 -- # read -r var val 00:06:31.933 00:18:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.933 00:18:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.933 00:18:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.933 00:06:31.933 real 0m2.760s 00:06:31.933 user 0m8.906s 00:06:31.933 sys 0m0.165s 00:06:31.933 00:18:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.933 00:18:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.933 ************************************ 00:06:31.933 END TEST accel_decomp_full_mcore 00:06:31.933 ************************************ 00:06:31.933 00:18:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.933 00:18:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:31.933 00:18:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.933 00:18:47 -- common/autotest_common.sh@10 -- # set +x 00:06:31.933 ************************************ 00:06:31.933 START TEST accel_decomp_mthread 00:06:31.933 ************************************ 00:06:31.933 00:18:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.933 00:18:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.933 00:18:47 -- accel/accel.sh@17 -- # local accel_module 00:06:31.933 00:18:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.933 00:18:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.933 00:18:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.933 00:18:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.933 00:18:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.933 00:18:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.933 00:18:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.933 00:18:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.933 00:18:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.933 00:18:47 -- accel/accel.sh@42 -- # jq -r . 00:06:31.933 [2024-09-29 00:18:47.601354] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:31.933 [2024-09-29 00:18:47.602153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57006 ] 00:06:31.933 [2024-09-29 00:18:47.737536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.192 [2024-09-29 00:18:47.787077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.130 00:18:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
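Every test in this stream follows the same run_test shape visible above: a START banner, the timed invocation, a real/user/sys summary, then an END banner. A minimal illustrative wrapper with that shape (run_test_sketch is a hypothetical name, not the actual run_test from autotest_common.sh, and accel_perf is called directly instead of through the accel_test helper):

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                 # prints the real/user/sys lines seen in the log
      echo "END TEST $name"
  }
  # e.g.: run_test_sketch accel_decomp_mthread \
  #       /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  #       -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2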
00:06:33.130 00:06:33.130 SPDK Configuration: 00:06:33.130 Core mask: 0x1 00:06:33.130 00:06:33.130 Accel Perf Configuration: 00:06:33.130 Workload Type: decompress 00:06:33.130 Transfer size: 4096 bytes 00:06:33.130 Vector count 1 00:06:33.130 Module: software 00:06:33.130 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.130 Queue depth: 32 00:06:33.130 Allocate depth: 32 00:06:33.130 # threads/core: 2 00:06:33.130 Run time: 1 seconds 00:06:33.130 Verify: Yes 00:06:33.130 00:06:33.130 Running for 1 seconds... 00:06:33.130 00:06:33.130 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.130 ------------------------------------------------------------------------------------ 00:06:33.130 0,1 40352/s 74 MiB/s 0 0 00:06:33.130 0,0 40224/s 74 MiB/s 0 0 00:06:33.130 ==================================================================================== 00:06:33.130 Total 80576/s 314 MiB/s 0 0' 00:06:33.130 00:18:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.130 00:18:48 -- accel/accel.sh@20 -- # IFS=: 00:06:33.130 00:18:48 -- accel/accel.sh@20 -- # read -r var val 00:06:33.130 00:18:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:33.130 00:18:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.130 00:18:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.130 00:18:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.130 00:18:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.130 00:18:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.130 00:18:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.130 00:18:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.130 00:18:48 -- accel/accel.sh@42 -- # jq -r . 00:06:33.130 [2024-09-29 00:18:48.976925] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
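In the mthread table above, the -T 2 flag shows up as "# threads/core: 2" in the printed configuration and as two result rows for core 0 (threads 0 and 1); their transfer counts again sum to the Total row: 40352 + 40224 = 80576 transfers/s, roughly 314 MiB/s at 4096 bytes per transfer.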
00:06:33.130 [2024-09-29 00:18:48.977185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57026 ] 00:06:33.389 [2024-09-29 00:18:49.111600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.389 [2024-09-29 00:18:49.158884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=0x1 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=decompress 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=software 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=32 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- 
accel/accel.sh@21 -- # val=32 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=2 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val=Yes 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:33.389 00:18:49 -- accel/accel.sh@21 -- # val= 00:06:33.389 00:18:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # IFS=: 00:06:33.389 00:18:49 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@21 -- # val= 00:06:34.766 00:18:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # IFS=: 00:06:34.766 00:18:50 -- accel/accel.sh@20 -- # read -r var val 00:06:34.766 00:18:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.766 00:18:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:34.766 00:18:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.766 00:06:34.766 real 0m2.749s 00:06:34.766 user 0m2.412s 00:06:34.766 sys 0m0.134s 00:06:34.766 00:18:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.766 00:18:50 -- common/autotest_common.sh@10 -- # set +x 00:06:34.766 ************************************ 00:06:34.766 END 
TEST accel_decomp_mthread 00:06:34.766 ************************************ 00:06:34.766 00:18:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.766 00:18:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:34.766 00:18:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.766 00:18:50 -- common/autotest_common.sh@10 -- # set +x 00:06:34.766 ************************************ 00:06:34.766 START TEST accel_deomp_full_mthread 00:06:34.766 ************************************ 00:06:34.766 00:18:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.766 00:18:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.766 00:18:50 -- accel/accel.sh@17 -- # local accel_module 00:06:34.766 00:18:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.766 00:18:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.766 00:18:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.766 00:18:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.766 00:18:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.766 00:18:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.766 00:18:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.766 00:18:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.766 00:18:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.766 00:18:50 -- accel/accel.sh@42 -- # jq -r . 00:06:34.766 [2024-09-29 00:18:50.405126] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:34.766 [2024-09-29 00:18:50.405214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:06:34.766 [2024-09-29 00:18:50.540744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.766 [2024-09-29 00:18:50.588997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.145 00:18:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:36.145 00:06:36.145 SPDK Configuration: 00:06:36.145 Core mask: 0x1 00:06:36.145 00:06:36.145 Accel Perf Configuration: 00:06:36.145 Workload Type: decompress 00:06:36.145 Transfer size: 111250 bytes 00:06:36.145 Vector count 1 00:06:36.145 Module: software 00:06:36.145 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.145 Queue depth: 32 00:06:36.145 Allocate depth: 32 00:06:36.145 # threads/core: 2 00:06:36.145 Run time: 1 seconds 00:06:36.145 Verify: Yes 00:06:36.145 00:06:36.145 Running for 1 seconds... 
00:06:36.145 00:06:36.145 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.145 ------------------------------------------------------------------------------------ 00:06:36.145 0,1 2720/s 112 MiB/s 0 0 00:06:36.145 0,0 2688/s 111 MiB/s 0 0 00:06:36.145 ==================================================================================== 00:06:36.145 Total 5408/s 573 MiB/s 0 0' 00:06:36.145 00:18:51 -- accel/accel.sh@20 -- # IFS=: 00:06:36.145 00:18:51 -- accel/accel.sh@20 -- # read -r var val 00:06:36.145 00:18:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.145 00:18:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.145 00:18:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:36.145 00:18:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.145 00:18:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.145 00:18:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.145 00:18:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.145 00:18:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.145 00:18:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.145 00:18:51 -- accel/accel.sh@42 -- # jq -r . 00:06:36.146 [2024-09-29 00:18:51.788664] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:36.146 [2024-09-29 00:18:51.788897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57074 ] 00:06:36.146 [2024-09-29 00:18:51.924903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.146 [2024-09-29 00:18:51.976769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=0x1 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=decompress 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=software 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=32 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=32 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=2 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val=Yes 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:36.405 00:18:52 -- accel/accel.sh@21 -- # val= 00:06:36.405 00:18:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # IFS=: 00:06:36.405 00:18:52 -- accel/accel.sh@20 -- # read -r var val 00:06:37.341 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # 
read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@21 -- # val= 00:06:37.342 00:18:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # IFS=: 00:06:37.342 00:18:53 -- accel/accel.sh@20 -- # read -r var val 00:06:37.342 00:18:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.342 ************************************ 00:06:37.342 END TEST accel_deomp_full_mthread 00:06:37.342 ************************************ 00:06:37.342 00:18:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:37.342 00:18:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.342 00:06:37.342 real 0m2.790s 00:06:37.342 user 0m2.444s 00:06:37.342 sys 0m0.145s 00:06:37.342 00:18:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.342 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:37.601 00:18:53 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:37.601 00:18:53 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.601 00:18:53 -- accel/accel.sh@129 -- # build_accel_config 00:06:37.601 00:18:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.601 00:18:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.601 00:18:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.601 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:37.601 00:18:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.601 00:18:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.601 00:18:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.601 00:18:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.601 00:18:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.601 00:18:53 -- accel/accel.sh@42 -- # jq -r . 00:06:37.601 ************************************ 00:06:37.601 START TEST accel_dif_functional_tests 00:06:37.601 ************************************ 00:06:37.601 00:18:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:37.601 [2024-09-29 00:18:53.276787] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
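For reference, the multithreaded decompress runs above reduce to a single accel_perf invocation; a minimal sketch, with the command and paths taken from the log and the flag readings inferred from the "Accel Perf Configuration" dump (so treat them as approximations, not the definitive option semantics):

  # -c /dev/fd/62  accel module config JSON produced by build_accel_config
  # -t 1           run time in seconds
  # -w decompress  workload type
  # -l <file>      compressed input file (test/accel/bib)
  # -y             verify the decompressed output
  # -o 0           transfer size; 0 corresponds to the full 111250-byte buffer in this run
  # -T 2           two worker threads per core
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2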
00:06:37.601 [2024-09-29 00:18:53.277014] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57112 ] 00:06:37.601 [2024-09-29 00:18:53.414235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.863 [2024-09-29 00:18:53.470562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.863 [2024-09-29 00:18:53.470715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.863 [2024-09-29 00:18:53.470717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.863 00:06:37.863 00:06:37.863 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.863 http://cunit.sourceforge.net/ 00:06:37.863 00:06:37.863 00:06:37.863 Suite: accel_dif 00:06:37.863 Test: verify: DIF generated, GUARD check ...passed 00:06:37.863 Test: verify: DIF generated, APPTAG check ...passed 00:06:37.863 Test: verify: DIF generated, REFTAG check ...passed 00:06:37.863 Test: verify: DIF not generated, GUARD check ...passed 00:06:37.863 Test: verify: DIF not generated, APPTAG check ...[2024-09-29 00:18:53.519973] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:37.863 [2024-09-29 00:18:53.520064] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:37.863 [2024-09-29 00:18:53.520103] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:37.863 passed 00:06:37.863 Test: verify: DIF not generated, REFTAG check ...[2024-09-29 00:18:53.520129] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:37.863 [2024-09-29 00:18:53.520167] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:37.863 passed 00:06:37.863 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:37.863 Test: verify: APPTAG incorrect, APPTAG check ...[2024-09-29 00:18:53.520272] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:37.863 [2024-09-29 00:18:53.520378] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:37.863 passed 00:06:37.863 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:37.863 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:37.863 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:37.863 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-09-29 00:18:53.520813] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:37.863 passed 00:06:37.863 Test: generate copy: DIF generated, GUARD check ...passed 00:06:37.863 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:37.863 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:37.863 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:37.863 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:37.863 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:37.863 Test: generate copy: iovecs-len validate ...[2024-09-29 00:18:53.521412] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:37.863 passed 00:06:37.863 Test: generate copy: buffer alignment validate ...passed 00:06:37.863 00:06:37.863 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.863 suites 1 1 n/a 0 0 00:06:37.863 tests 20 20 20 0 0 00:06:37.863 asserts 204 204 204 0 n/a 00:06:37.863 00:06:37.863 Elapsed time = 0.003 seconds 00:06:37.863 ************************************ 00:06:37.863 END TEST accel_dif_functional_tests 00:06:37.863 ************************************ 00:06:37.863 00:06:37.863 real 0m0.452s 00:06:37.863 user 0m0.507s 00:06:37.863 sys 0m0.106s 00:06:37.863 00:18:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.863 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.122 00:06:38.122 real 0m58.987s 00:06:38.122 user 1m4.305s 00:06:38.122 sys 0m4.184s 00:06:38.122 ************************************ 00:06:38.122 END TEST accel 00:06:38.122 ************************************ 00:06:38.122 00:18:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.122 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.122 00:18:53 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.122 00:18:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.122 00:18:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.122 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.122 ************************************ 00:06:38.122 START TEST accel_rpc 00:06:38.122 ************************************ 00:06:38.122 00:18:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:38.122 * Looking for test storage... 00:06:38.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:38.122 00:18:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.122 00:18:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57175 00:06:38.122 00:18:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 57175 00:06:38.122 00:18:53 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:38.122 00:18:53 -- common/autotest_common.sh@819 -- # '[' -z 57175 ']' 00:06:38.122 00:18:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.123 00:18:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.123 00:18:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.123 00:18:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.123 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.123 [2024-09-29 00:18:53.904277] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:38.123 [2024-09-29 00:18:53.904629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57175 ] 00:06:38.382 [2024-09-29 00:18:54.038224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.382 [2024-09-29 00:18:54.091095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.382 [2024-09-29 00:18:54.091265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.382 00:18:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.382 00:18:54 -- common/autotest_common.sh@852 -- # return 0 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:38.382 00:18:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.382 00:18:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.382 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.382 ************************************ 00:06:38.382 START TEST accel_assign_opcode 00:06:38.382 ************************************ 00:06:38.382 00:18:54 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:38.382 00:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.382 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.382 [2024-09-29 00:18:54.159660] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:38.382 00:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:38.382 00:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.382 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.382 [2024-09-29 00:18:54.167656] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:38.382 00:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.382 00:18:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:38.382 00:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.382 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 00:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.641 00:18:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:38.641 00:18:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:38.641 00:18:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.641 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 00:18:54 -- accel/accel_rpc.sh@42 -- # grep software 00:06:38.641 00:18:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.641 software 00:06:38.641 ************************************ 00:06:38.641 END TEST accel_assign_opcode 00:06:38.641 ************************************ 00:06:38.641 00:06:38.641 real 0m0.195s 00:06:38.641 user 0m0.057s 00:06:38.641 sys 0m0.011s 00:06:38.641 00:18:54 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.641 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.641 00:18:54 -- accel/accel_rpc.sh@55 -- # killprocess 57175 00:06:38.641 00:18:54 -- common/autotest_common.sh@926 -- # '[' -z 57175 ']' 00:06:38.641 00:18:54 -- common/autotest_common.sh@930 -- # kill -0 57175 00:06:38.641 00:18:54 -- common/autotest_common.sh@931 -- # uname 00:06:38.641 00:18:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.641 00:18:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57175 00:06:38.641 killing process with pid 57175 00:06:38.641 00:18:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.641 00:18:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.641 00:18:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57175' 00:06:38.641 00:18:54 -- common/autotest_common.sh@945 -- # kill 57175 00:06:38.641 00:18:54 -- common/autotest_common.sh@950 -- # wait 57175 00:06:38.900 ************************************ 00:06:38.900 END TEST accel_rpc 00:06:38.900 ************************************ 00:06:38.900 00:06:38.900 real 0m0.926s 00:06:38.900 user 0m0.961s 00:06:38.900 sys 0m0.285s 00:06:38.900 00:18:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.900 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.900 00:18:54 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:38.900 00:18:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.900 00:18:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.900 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.900 ************************************ 00:06:38.900 START TEST app_cmdline 00:06:38.900 ************************************ 00:06:38.900 00:18:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.160 * Looking for test storage... 00:06:39.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.160 00:18:54 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.160 00:18:54 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57254 00:06:39.160 00:18:54 -- app/cmdline.sh@18 -- # waitforlisten 57254 00:06:39.160 00:18:54 -- common/autotest_common.sh@819 -- # '[' -z 57254 ']' 00:06:39.160 00:18:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.160 00:18:54 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.160 00:18:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.160 00:18:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.160 00:18:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.160 00:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:39.160 [2024-09-29 00:18:54.877923] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
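The accel_rpc suite above drives opcode assignment over JSON-RPC while the target is still in its --wait-for-rpc state; a rough equivalent using the RPC CLI, with the method names as they appear in the log and the target assumed to be listening on the default /var/tmp/spdk.sock:

  ./scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                    # finish subsystem initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expected output: software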
00:06:39.160 [2024-09-29 00:18:54.878884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57254 ] 00:06:39.437 [2024-09-29 00:18:55.017277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.437 [2024-09-29 00:18:55.078689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.437 [2024-09-29 00:18:55.079101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.383 00:18:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.383 00:18:55 -- common/autotest_common.sh@852 -- # return 0 00:06:40.383 00:18:55 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:40.383 { 00:06:40.383 "version": "SPDK v24.01.1-pre git sha1 726a04d70", 00:06:40.383 "fields": { 00:06:40.383 "major": 24, 00:06:40.383 "minor": 1, 00:06:40.383 "patch": 1, 00:06:40.384 "suffix": "-pre", 00:06:40.384 "commit": "726a04d70" 00:06:40.384 } 00:06:40.384 } 00:06:40.384 00:18:56 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.384 00:18:56 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.384 00:18:56 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.384 00:18:56 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.384 00:18:56 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.384 00:18:56 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.384 00:18:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.384 00:18:56 -- app/cmdline.sh@26 -- # sort 00:06:40.384 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:40.384 00:18:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.384 00:18:56 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.384 00:18:56 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.384 00:18:56 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.384 00:18:56 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.384 00:18:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.384 00:18:56 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.384 00:18:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.384 00:18:56 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.384 00:18:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.384 00:18:56 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.384 00:18:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.384 00:18:56 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.384 00:18:56 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:40.384 00:18:56 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.642 request: 00:06:40.642 { 00:06:40.642 "method": "env_dpdk_get_mem_stats", 00:06:40.642 "req_id": 1 00:06:40.642 } 00:06:40.642 Got 
JSON-RPC error response 00:06:40.642 response: 00:06:40.642 { 00:06:40.642 "code": -32601, 00:06:40.642 "message": "Method not found" 00:06:40.642 } 00:06:40.642 00:18:56 -- common/autotest_common.sh@643 -- # es=1 00:06:40.642 00:18:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:40.642 00:18:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:40.643 00:18:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:40.643 00:18:56 -- app/cmdline.sh@1 -- # killprocess 57254 00:06:40.643 00:18:56 -- common/autotest_common.sh@926 -- # '[' -z 57254 ']' 00:06:40.643 00:18:56 -- common/autotest_common.sh@930 -- # kill -0 57254 00:06:40.643 00:18:56 -- common/autotest_common.sh@931 -- # uname 00:06:40.643 00:18:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.643 00:18:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57254 00:06:40.901 killing process with pid 57254 00:06:40.901 00:18:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.901 00:18:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.901 00:18:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57254' 00:06:40.901 00:18:56 -- common/autotest_common.sh@945 -- # kill 57254 00:06:40.901 00:18:56 -- common/autotest_common.sh@950 -- # wait 57254 00:06:41.160 ************************************ 00:06:41.160 END TEST app_cmdline 00:06:41.160 ************************************ 00:06:41.160 00:06:41.160 real 0m2.038s 00:06:41.160 user 0m2.732s 00:06:41.160 sys 0m0.362s 00:06:41.160 00:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.160 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.160 00:18:56 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.160 00:18:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.160 00:18:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.160 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.160 ************************************ 00:06:41.160 START TEST version 00:06:41.160 ************************************ 00:06:41.161 00:18:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.161 * Looking for test storage... 
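The cmdline test above starts spdk_tgt with an RPC allowlist, so only the listed methods are reachable and anything else is rejected with JSON-RPC error -32601. A condensed sketch of the behavior shown in the log:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected: "Method not found" (-32601)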
00:06:41.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.161 00:18:56 -- app/version.sh@17 -- # get_header_version major 00:06:41.161 00:18:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.161 00:18:56 -- app/version.sh@14 -- # cut -f2 00:06:41.161 00:18:56 -- app/version.sh@14 -- # tr -d '"' 00:06:41.161 00:18:56 -- app/version.sh@17 -- # major=24 00:06:41.161 00:18:56 -- app/version.sh@18 -- # get_header_version minor 00:06:41.161 00:18:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.161 00:18:56 -- app/version.sh@14 -- # cut -f2 00:06:41.161 00:18:56 -- app/version.sh@14 -- # tr -d '"' 00:06:41.161 00:18:56 -- app/version.sh@18 -- # minor=1 00:06:41.161 00:18:56 -- app/version.sh@19 -- # get_header_version patch 00:06:41.161 00:18:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.161 00:18:56 -- app/version.sh@14 -- # cut -f2 00:06:41.161 00:18:56 -- app/version.sh@14 -- # tr -d '"' 00:06:41.161 00:18:56 -- app/version.sh@19 -- # patch=1 00:06:41.161 00:18:56 -- app/version.sh@20 -- # get_header_version suffix 00:06:41.161 00:18:56 -- app/version.sh@14 -- # cut -f2 00:06:41.161 00:18:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.161 00:18:56 -- app/version.sh@14 -- # tr -d '"' 00:06:41.161 00:18:56 -- app/version.sh@20 -- # suffix=-pre 00:06:41.161 00:18:56 -- app/version.sh@22 -- # version=24.1 00:06:41.161 00:18:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.161 00:18:56 -- app/version.sh@25 -- # version=24.1.1 00:06:41.161 00:18:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:41.161 00:18:56 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.161 00:18:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.161 00:18:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:41.161 00:18:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:41.161 00:06:41.161 real 0m0.142s 00:06:41.161 user 0m0.080s 00:06:41.161 sys 0m0.095s 00:06:41.161 ************************************ 00:06:41.161 END TEST version 00:06:41.161 ************************************ 00:06:41.161 00:18:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.161 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:41.419 00:18:57 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:41.419 00:18:57 -- spdk/autotest.sh@204 -- # uname -s 00:06:41.419 00:18:57 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:41.419 00:18:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:41.419 00:18:57 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:06:41.419 00:18:57 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:06:41.419 00:18:57 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.420 00:18:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.420 00:18:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.420 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 
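The version check above is essentially string assembly from include/spdk/version.h followed by a comparison against the installed Python package; a condensed sketch of the steps visible in the log (the -pre suffix is rendered as rc0 before the comparison):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  version="$major.$minor"; (( patch != 0 )) && version="$version.$patch"   # 24.1.1 here
  python3 -c 'import spdk; print(spdk.__version__)'                        # 24.1.1rc0 here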
************************************ 00:06:41.420 START TEST spdk_dd 00:06:41.420 ************************************ 00:06:41.420 00:18:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:41.420 * Looking for test storage... 00:06:41.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.420 00:18:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.420 00:18:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.420 00:18:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.420 00:18:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.420 00:18:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.420 00:18:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.420 00:18:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.420 00:18:57 -- paths/export.sh@5 -- # export PATH 00:06:41.420 00:18:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.420 00:18:57 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:41.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:41.678 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:41.678 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:41.678 00:18:57 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:41.678 00:18:57 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:41.678 00:18:57 -- scripts/common.sh@311 -- # local bdf bdfs 00:06:41.678 00:18:57 -- scripts/common.sh@312 -- # local nvmes 00:06:41.678 00:18:57 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:06:41.678 00:18:57 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:41.678 00:18:57 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:06:41.678 00:18:57 -- scripts/common.sh@297 -- # local bdf= 00:06:41.678 00:18:57 -- scripts/common.sh@299 -- # 
iter_all_pci_class_code 01 08 02 00:06:41.678 00:18:57 -- scripts/common.sh@232 -- # local class 00:06:41.678 00:18:57 -- scripts/common.sh@233 -- # local subclass 00:06:41.678 00:18:57 -- scripts/common.sh@234 -- # local progif 00:06:41.679 00:18:57 -- scripts/common.sh@235 -- # printf %02x 1 00:06:41.679 00:18:57 -- scripts/common.sh@235 -- # class=01 00:06:41.679 00:18:57 -- scripts/common.sh@236 -- # printf %02x 8 00:06:41.679 00:18:57 -- scripts/common.sh@236 -- # subclass=08 00:06:41.679 00:18:57 -- scripts/common.sh@237 -- # printf %02x 2 00:06:41.679 00:18:57 -- scripts/common.sh@237 -- # progif=02 00:06:41.679 00:18:57 -- scripts/common.sh@239 -- # hash lspci 00:06:41.679 00:18:57 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:06:41.679 00:18:57 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:06:41.679 00:18:57 -- scripts/common.sh@242 -- # grep -i -- -p02 00:06:41.679 00:18:57 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:41.679 00:18:57 -- scripts/common.sh@244 -- # tr -d '"' 00:06:41.679 00:18:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:41.679 00:18:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:06:41.679 00:18:57 -- scripts/common.sh@15 -- # local i 00:06:41.679 00:18:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:06:41.679 00:18:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:41.679 00:18:57 -- scripts/common.sh@24 -- # return 0 00:06:41.679 00:18:57 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:06:41.679 00:18:57 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:41.679 00:18:57 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:06:41.679 00:18:57 -- scripts/common.sh@15 -- # local i 00:06:41.679 00:18:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:06:41.679 00:18:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:41.679 00:18:57 -- scripts/common.sh@24 -- # return 0 00:06:41.679 00:18:57 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:06:41.679 00:18:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:41.679 00:18:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:06:41.937 00:18:57 -- scripts/common.sh@322 -- # uname -s 00:06:41.937 00:18:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:41.937 00:18:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:41.937 00:18:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:41.937 00:18:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:06:41.937 00:18:57 -- scripts/common.sh@322 -- # uname -s 00:06:41.937 00:18:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:41.937 00:18:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:41.937 00:18:57 -- scripts/common.sh@327 -- # (( 2 )) 00:06:41.937 00:18:57 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:41.937 00:18:57 -- dd/dd.sh@13 -- # check_liburing 00:06:41.937 00:18:57 -- dd/common.sh@139 -- # local lib so 00:06:41.937 00:18:57 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:41.937 00:18:57 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 
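The nvme_in_userspace walk above amounts to filtering lspci output for class 01 / subclass 08 / prog-if 02 (NVMe controllers) and then checking which BDFs are usable; a minimal sketch of the pipeline as it appears in the log:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # on this VM the pipeline yields 0000:00:06.0 and 0000:00:07.0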
-- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- 
# [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:06:41.937 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.937 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 
-- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:41.938 00:18:57 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:41.938 00:18:57 -- dd/common.sh@144 -- # printf '* 
spdk_dd linked to liburing\n' 00:06:41.938 * spdk_dd linked to liburing 00:06:41.938 00:18:57 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:41.938 00:18:57 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:41.938 00:18:57 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:41.938 00:18:57 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:41.938 00:18:57 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:41.938 00:18:57 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:41.938 00:18:57 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:41.938 00:18:57 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:41.938 00:18:57 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:41.938 00:18:57 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:41.938 00:18:57 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:41.938 00:18:57 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:41.938 00:18:57 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:41.938 00:18:57 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:41.938 00:18:57 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:41.938 00:18:57 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:41.938 00:18:57 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:41.938 00:18:57 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:41.938 00:18:57 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:41.938 00:18:57 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:41.938 00:18:57 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:41.938 00:18:57 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:41.938 00:18:57 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:41.939 00:18:57 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:41.939 00:18:57 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:41.939 00:18:57 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:41.939 00:18:57 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:41.939 00:18:57 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:41.939 00:18:57 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:41.939 00:18:57 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:41.939 00:18:57 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:41.939 00:18:57 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:41.939 00:18:57 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:41.939 00:18:57 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:41.939 00:18:57 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:41.939 00:18:57 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:41.939 00:18:57 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:41.939 00:18:57 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:41.939 00:18:57 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:41.939 00:18:57 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:41.939 00:18:57 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:41.939 00:18:57 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:41.939 00:18:57 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:41.939 00:18:57 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:41.939 00:18:57 -- common/build_config.sh@43 -- 
# CONFIG_UNIT_TESTS=n 00:06:41.939 00:18:57 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:41.939 00:18:57 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:41.939 00:18:57 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:41.939 00:18:57 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:41.939 00:18:57 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:41.939 00:18:57 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:41.939 00:18:57 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:41.939 00:18:57 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:41.939 00:18:57 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:41.939 00:18:57 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:06:41.939 00:18:57 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:41.939 00:18:57 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:41.939 00:18:57 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:41.939 00:18:57 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:41.939 00:18:57 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:41.939 00:18:57 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:41.939 00:18:57 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:41.939 00:18:57 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:41.939 00:18:57 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:41.939 00:18:57 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:41.939 00:18:57 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:41.939 00:18:57 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:41.939 00:18:57 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:41.939 00:18:57 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:41.939 00:18:57 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:41.939 00:18:57 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:41.939 00:18:57 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:41.939 00:18:57 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:41.939 00:18:57 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:41.939 00:18:57 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:41.939 00:18:57 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:41.939 00:18:57 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:41.939 00:18:57 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:41.939 00:18:57 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:41.939 00:18:57 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:41.939 00:18:57 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:06:41.939 00:18:57 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:41.939 00:18:57 -- dd/common.sh@152 -- # [[ ! 
-e /usr/lib64/liburing.so.2 ]] 00:06:41.939 00:18:57 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:41.939 00:18:57 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:41.939 00:18:57 -- dd/common.sh@157 -- # return 0 00:06:41.939 00:18:57 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:41.939 00:18:57 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:41.939 00:18:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:41.939 00:18:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.939 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.939 ************************************ 00:06:41.939 START TEST spdk_dd_basic_rw 00:06:41.939 ************************************ 00:06:41.939 00:18:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:41.939 * Looking for test storage... 00:06:41.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.939 00:18:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.939 00:18:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.939 00:18:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.939 00:18:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.939 00:18:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.939 00:18:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.939 00:18:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.939 00:18:57 -- paths/export.sh@5 -- # export PATH 00:06:41.939 00:18:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.939 00:18:57 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:41.939 00:18:57 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:41.939 00:18:57 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:41.939 00:18:57 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:06:41.939 00:18:57 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:41.939 00:18:57 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:06:41.939 00:18:57 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:41.939 00:18:57 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:41.939 00:18:57 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.939 00:18:57 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:06:41.939 00:18:57 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:06:41.939 00:18:57 -- dd/common.sh@126 -- # mapfile -t id 00:06:41.939 00:18:57 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:06:42.199 00:18:57 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported 
Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): 
Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2194 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:42.199 00:18:57 -- dd/common.sh@130 -- # lbaf=04 00:06:42.200 00:18:57 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe 
Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported 
Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2194 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA 
Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:42.200 00:18:57 -- dd/common.sh@132 -- # lbaf=4096 00:06:42.200 00:18:57 -- dd/common.sh@134 -- # echo 4096 00:06:42.200 00:18:57 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:42.200 00:18:57 -- dd/basic_rw.sh@96 -- # : 00:06:42.200 00:18:57 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.200 00:18:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.200 00:18:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.200 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.200 00:18:57 -- dd/basic_rw.sh@96 -- # gen_conf 00:06:42.200 00:18:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:42.200 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:06:42.200 ************************************ 00:06:42.200 START TEST dd_bs_lt_native_bs 00:06:42.200 ************************************ 00:06:42.200 00:18:57 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.200 00:18:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:42.200 00:18:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.200 00:18:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.200 00:18:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.200 00:18:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.200 00:18:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.200 00:18:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.200 00:18:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:42.200 00:18:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.200 00:18:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.200 00:18:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:42.200 { 00:06:42.200 "subsystems": [ 00:06:42.200 { 00:06:42.200 "subsystem": "bdev", 00:06:42.200 "config": [ 00:06:42.200 { 00:06:42.200 "params": { 00:06:42.200 "trtype": "pcie", 00:06:42.200 "traddr": "0000:00:06.0", 00:06:42.200 "name": "Nvme0" 00:06:42.200 }, 00:06:42.200 "method": "bdev_nvme_attach_controller" 00:06:42.200 }, 00:06:42.200 { 00:06:42.200 "method": "bdev_wait_for_examine" 00:06:42.200 } 00:06:42.200 ] 00:06:42.200 } 00:06:42.200 ] 00:06:42.200 } 00:06:42.200 [2024-09-29 00:18:57.959274] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
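The xtrace above shows how dd/common.sh works out the native block size for 0000:00:06.0: the spdk_nvme_identify dump is matched against 'Current LBA Format: *LBA Format #([0-9]+)' to pick the active LBA format (#04 here), then against 'LBA Format #04: Data Size: *([0-9]+)' to read its data size, giving native_bs=4096. A minimal standalone sketch of the same idea, not part of the captured output (the real helper uses mapfile and runs under xtrace):

    # Sketch: approximate the native-block-size lookup traced above.
    pci=0000:00:06.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}
    echo "$native_bs"   # 4096 for the QEMU controller identified above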
00:06:42.200 [2024-09-29 00:18:57.959560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57574 ] 00:06:42.458 [2024-09-29 00:18:58.098994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.458 [2024-09-29 00:18:58.169534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.458 [2024-09-29 00:18:58.291614] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:42.458 [2024-09-29 00:18:58.291687] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.717 [2024-09-29 00:18:58.368000] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:42.717 00:18:58 -- common/autotest_common.sh@643 -- # es=234 00:06:42.717 00:18:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:42.717 00:18:58 -- common/autotest_common.sh@652 -- # es=106 00:06:42.717 00:18:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:42.717 00:18:58 -- common/autotest_common.sh@660 -- # es=1 00:06:42.717 00:18:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:42.717 00:06:42.717 real 0m0.574s 00:06:42.717 user 0m0.411s 00:06:42.717 sys 0m0.114s 00:06:42.717 00:18:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.717 ************************************ 00:06:42.717 END TEST dd_bs_lt_native_bs 00:06:42.717 ************************************ 00:06:42.717 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.717 00:18:58 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:42.717 00:18:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:42.717 00:18:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.717 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.717 ************************************ 00:06:42.717 START TEST dd_rw 00:06:42.717 ************************************ 00:06:42.717 00:18:58 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:06:42.717 00:18:58 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:42.717 00:18:58 -- dd/basic_rw.sh@12 -- # local count size 00:06:42.717 00:18:58 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:42.717 00:18:58 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:42.717 00:18:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:42.717 00:18:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:42.717 00:18:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:42.717 00:18:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:42.717 00:18:58 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:42.717 00:18:58 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:42.717 00:18:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:42.717 00:18:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:42.717 00:18:58 -- dd/basic_rw.sh@23 -- # count=15 00:06:42.717 00:18:58 -- dd/basic_rw.sh@24 -- # count=15 00:06:42.717 00:18:58 -- dd/basic_rw.sh@25 -- # size=61440 00:06:42.717 00:18:58 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:42.717 00:18:58 -- dd/common.sh@98 -- # xtrace_disable 00:06:42.717 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:06:43.287 00:18:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
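The dd_bs_lt_native_bs run that finishes above is a negative test: spdk_dd is asked to write with --bs=2048, smaller than the 4096-byte native block size just detected, so it is expected to abort with '--bs value cannot be less than input (1) neither output (4096) native block size', and the NOT wrapper folds the non-zero exit status back into a pass (the es=234 -> es=1 bookkeeping in the trace). Re-running the check by hand could look like the sketch below; nvme0.json is a stand-in for the bdev_nvme_attach_controller config the test actually feeds over /dev/fd/61.

    # Sketch: an undersized --bs must be rejected by spdk_dd.
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    if "$DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json nvme0.json; then
        echo "ERROR: undersized --bs was accepted" >&2
        exit 1
    fi
    echo "expected failure observed"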
00:06:43.287 00:18:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.287 00:18:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.287 00:18:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.287 [2024-09-29 00:18:59.066190] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:43.287 [2024-09-29 00:18:59.066272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57605 ] 00:06:43.287 { 00:06:43.287 "subsystems": [ 00:06:43.287 { 00:06:43.287 "subsystem": "bdev", 00:06:43.287 "config": [ 00:06:43.287 { 00:06:43.287 "params": { 00:06:43.287 "trtype": "pcie", 00:06:43.287 "traddr": "0000:00:06.0", 00:06:43.287 "name": "Nvme0" 00:06:43.287 }, 00:06:43.287 "method": "bdev_nvme_attach_controller" 00:06:43.287 }, 00:06:43.287 { 00:06:43.287 "method": "bdev_wait_for_examine" 00:06:43.287 } 00:06:43.287 ] 00:06:43.287 } 00:06:43.287 ] 00:06:43.287 } 00:06:43.548 [2024-09-29 00:18:59.203972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.548 [2024-09-29 00:18:59.251529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.807  Copying: 60/60 [kB] (average 29 MBps) 00:06:43.807 00:06:43.807 00:18:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:43.807 00:18:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:43.807 00:18:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.807 00:18:59 -- common/autotest_common.sh@10 -- # set +x 00:06:43.807 [2024-09-29 00:18:59.613894] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:43.807 [2024-09-29 00:18:59.614231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57618 ] 00:06:43.807 { 00:06:43.807 "subsystems": [ 00:06:43.807 { 00:06:43.807 "subsystem": "bdev", 00:06:43.807 "config": [ 00:06:43.807 { 00:06:43.807 "params": { 00:06:43.807 "trtype": "pcie", 00:06:43.807 "traddr": "0000:00:06.0", 00:06:43.807 "name": "Nvme0" 00:06:43.807 }, 00:06:43.807 "method": "bdev_nvme_attach_controller" 00:06:43.807 }, 00:06:43.807 { 00:06:43.807 "method": "bdev_wait_for_examine" 00:06:43.807 } 00:06:43.807 ] 00:06:43.807 } 00:06:43.807 ] 00:06:43.807 } 00:06:44.067 [2024-09-29 00:18:59.756128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.067 [2024-09-29 00:18:59.807089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.325  Copying: 60/60 [kB] (average 19 MBps) 00:06:44.325 00:06:44.325 00:19:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.325 00:19:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:44.325 00:19:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.325 00:19:00 -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.325 00:19:00 -- dd/common.sh@12 -- # local size=61440 00:06:44.325 00:19:00 -- dd/common.sh@14 -- # local bs=1048576 00:06:44.325 00:19:00 -- dd/common.sh@15 -- # local count=1 00:06:44.325 00:19:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.325 00:19:00 -- dd/common.sh@18 -- # gen_conf 00:06:44.325 00:19:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.325 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:06:44.325 [2024-09-29 00:19:00.157625] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
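Each dd_rw iteration in this stretch of the log is a write/read/verify round-trip: spdk_dd writes the generated dd.dump0 into the Nvme0n1 bdev at the chosen --bs and --qd, reads the same number of blocks back into dd.dump1, and diff -q confirms the two files are identical before the bdev is cleared for the next combination. Condensed into a sketch (conf.json stands in for the gen_conf JSON passed via /dev/fd/62 in the trace):

    # Sketch: one basic_rw pass at bs=4096, qd=1, 15 blocks (61440 bytes).
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json conf.json
    "$DD" --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json conf.json
    diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1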
00:06:44.325 [2024-09-29 00:19:00.157887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57635 ] 00:06:44.325 { 00:06:44.325 "subsystems": [ 00:06:44.325 { 00:06:44.325 "subsystem": "bdev", 00:06:44.325 "config": [ 00:06:44.325 { 00:06:44.325 "params": { 00:06:44.325 "trtype": "pcie", 00:06:44.325 "traddr": "0000:00:06.0", 00:06:44.325 "name": "Nvme0" 00:06:44.325 }, 00:06:44.325 "method": "bdev_nvme_attach_controller" 00:06:44.325 }, 00:06:44.325 { 00:06:44.325 "method": "bdev_wait_for_examine" 00:06:44.325 } 00:06:44.325 ] 00:06:44.325 } 00:06:44.325 ] 00:06:44.325 } 00:06:44.584 [2024-09-29 00:19:00.296662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.584 [2024-09-29 00:19:00.350098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.843  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:44.843 00:06:44.843 00:19:00 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.843 00:19:00 -- dd/basic_rw.sh@23 -- # count=15 00:06:44.843 00:19:00 -- dd/basic_rw.sh@24 -- # count=15 00:06:44.843 00:19:00 -- dd/basic_rw.sh@25 -- # size=61440 00:06:44.843 00:19:00 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:44.843 00:19:00 -- dd/common.sh@98 -- # xtrace_disable 00:06:44.843 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:06:45.411 00:19:01 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:45.411 00:19:01 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.411 00:19:01 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.411 00:19:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.411 [2024-09-29 00:19:01.169973] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:45.411 [2024-09-29 00:19:01.170062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57653 ] 00:06:45.411 { 00:06:45.411 "subsystems": [ 00:06:45.411 { 00:06:45.411 "subsystem": "bdev", 00:06:45.411 "config": [ 00:06:45.411 { 00:06:45.411 "params": { 00:06:45.411 "trtype": "pcie", 00:06:45.411 "traddr": "0000:00:06.0", 00:06:45.411 "name": "Nvme0" 00:06:45.411 }, 00:06:45.411 "method": "bdev_nvme_attach_controller" 00:06:45.411 }, 00:06:45.411 { 00:06:45.411 "method": "bdev_wait_for_examine" 00:06:45.411 } 00:06:45.411 ] 00:06:45.411 } 00:06:45.411 ] 00:06:45.411 } 00:06:45.671 [2024-09-29 00:19:01.307691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.671 [2024-09-29 00:19:01.354997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.930  Copying: 60/60 [kB] (average 58 MBps) 00:06:45.930 00:06:45.930 00:19:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:45.930 00:19:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.930 00:19:01 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.930 00:19:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.930 [2024-09-29 00:19:01.695918] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
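The counters in the trace (basic_rw.sh@15 through @25) define the matrix the rest of this section walks through: block sizes are the 4096-byte native size shifted left 0, 1 and 2 times (4096, 8192, 16384), queue depths are 1 and 64, and the block count shrinks with the block size (15, 7, 3) so each pass transfers 61440, 57344 or 49152 bytes. A compact sketch of that loop structure, with do_rw as a hypothetical stand-in for the write/read/diff steps shown above:

    # Sketch: the bs/qd matrix exercised by the dd_rw test.
    native_bs=4096
    qds=(1 64)
    counts=(15 7 3)                    # 61440, 57344, 49152 bytes respectively
    for shift in 0 1 2; do
        bs=$(( native_bs << shift ))
        for qd in "${qds[@]}"; do
            do_rw "$bs" "$qd" "${counts[$shift]}"   # hypothetical helper
        done
    done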
00:06:45.930 [2024-09-29 00:19:01.696010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57666 ] 00:06:45.930 { 00:06:45.930 "subsystems": [ 00:06:45.930 { 00:06:45.930 "subsystem": "bdev", 00:06:45.930 "config": [ 00:06:45.930 { 00:06:45.930 "params": { 00:06:45.930 "trtype": "pcie", 00:06:45.930 "traddr": "0000:00:06.0", 00:06:45.930 "name": "Nvme0" 00:06:45.930 }, 00:06:45.930 "method": "bdev_nvme_attach_controller" 00:06:45.930 }, 00:06:45.930 { 00:06:45.930 "method": "bdev_wait_for_examine" 00:06:45.930 } 00:06:45.930 ] 00:06:45.930 } 00:06:45.930 ] 00:06:45.930 } 00:06:46.189 [2024-09-29 00:19:01.832407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.189 [2024-09-29 00:19:01.888318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.448  Copying: 60/60 [kB] (average 58 MBps) 00:06:46.448 00:06:46.448 00:19:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.448 00:19:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:46.448 00:19:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.448 00:19:02 -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.448 00:19:02 -- dd/common.sh@12 -- # local size=61440 00:06:46.448 00:19:02 -- dd/common.sh@14 -- # local bs=1048576 00:06:46.448 00:19:02 -- dd/common.sh@15 -- # local count=1 00:06:46.448 00:19:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.448 00:19:02 -- dd/common.sh@18 -- # gen_conf 00:06:46.448 00:19:02 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.448 00:19:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.448 [2024-09-29 00:19:02.228260] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:46.448 [2024-09-29 00:19:02.228376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57680 ] 00:06:46.448 { 00:06:46.448 "subsystems": [ 00:06:46.448 { 00:06:46.448 "subsystem": "bdev", 00:06:46.448 "config": [ 00:06:46.448 { 00:06:46.448 "params": { 00:06:46.448 "trtype": "pcie", 00:06:46.448 "traddr": "0000:00:06.0", 00:06:46.448 "name": "Nvme0" 00:06:46.448 }, 00:06:46.448 "method": "bdev_nvme_attach_controller" 00:06:46.448 }, 00:06:46.448 { 00:06:46.448 "method": "bdev_wait_for_examine" 00:06:46.448 } 00:06:46.448 ] 00:06:46.448 } 00:06:46.448 ] 00:06:46.448 } 00:06:46.707 [2024-09-29 00:19:02.364541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.707 [2024-09-29 00:19:02.412506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.966  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:46.967 00:06:46.967 00:19:02 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:46.967 00:19:02 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.967 00:19:02 -- dd/basic_rw.sh@23 -- # count=7 00:06:46.967 00:19:02 -- dd/basic_rw.sh@24 -- # count=7 00:06:46.967 00:19:02 -- dd/basic_rw.sh@25 -- # size=57344 00:06:46.967 00:19:02 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:46.967 00:19:02 -- dd/common.sh@98 -- # xtrace_disable 00:06:46.967 00:19:02 -- common/autotest_common.sh@10 -- # set +x 00:06:47.535 00:19:03 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:47.535 00:19:03 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.535 00:19:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:47.535 00:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.535 [2024-09-29 00:19:03.192884] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:47.535 [2024-09-29 00:19:03.193388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57697 ] 00:06:47.535 { 00:06:47.535 "subsystems": [ 00:06:47.535 { 00:06:47.535 "subsystem": "bdev", 00:06:47.535 "config": [ 00:06:47.535 { 00:06:47.535 "params": { 00:06:47.535 "trtype": "pcie", 00:06:47.535 "traddr": "0000:00:06.0", 00:06:47.535 "name": "Nvme0" 00:06:47.535 }, 00:06:47.535 "method": "bdev_nvme_attach_controller" 00:06:47.535 }, 00:06:47.535 { 00:06:47.535 "method": "bdev_wait_for_examine" 00:06:47.535 } 00:06:47.535 ] 00:06:47.535 } 00:06:47.535 ] 00:06:47.535 } 00:06:47.535 [2024-09-29 00:19:03.331479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.794 [2024-09-29 00:19:03.386985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.053  Copying: 56/56 [kB] (average 27 MBps) 00:06:48.053 00:06:48.053 00:19:03 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:48.053 00:19:03 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.053 00:19:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.053 00:19:03 -- common/autotest_common.sh@10 -- # set +x 00:06:48.053 [2024-09-29 00:19:03.731611] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:48.053 [2024-09-29 00:19:03.731691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57710 ] 00:06:48.053 { 00:06:48.053 "subsystems": [ 00:06:48.053 { 00:06:48.053 "subsystem": "bdev", 00:06:48.053 "config": [ 00:06:48.053 { 00:06:48.053 "params": { 00:06:48.053 "trtype": "pcie", 00:06:48.053 "traddr": "0000:00:06.0", 00:06:48.053 "name": "Nvme0" 00:06:48.053 }, 00:06:48.053 "method": "bdev_nvme_attach_controller" 00:06:48.053 }, 00:06:48.053 { 00:06:48.053 "method": "bdev_wait_for_examine" 00:06:48.053 } 00:06:48.053 ] 00:06:48.053 } 00:06:48.053 ] 00:06:48.053 } 00:06:48.053 [2024-09-29 00:19:03.867712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.313 [2024-09-29 00:19:03.916919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.571  Copying: 56/56 [kB] (average 54 MBps) 00:06:48.571 00:06:48.571 00:19:04 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.571 00:19:04 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:48.571 00:19:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.571 00:19:04 -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.571 00:19:04 -- dd/common.sh@12 -- # local size=57344 00:06:48.571 00:19:04 -- dd/common.sh@14 -- # local bs=1048576 00:06:48.571 00:19:04 -- dd/common.sh@15 -- # local count=1 00:06:48.571 00:19:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.571 00:19:04 -- dd/common.sh@18 -- # gen_conf 00:06:48.571 00:19:04 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.571 00:19:04 -- common/autotest_common.sh@10 -- # set +x 00:06:48.571 [2024-09-29 00:19:04.263019] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:06:48.571 [2024-09-29 00:19:04.263115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57724 ] 00:06:48.571 { 00:06:48.571 "subsystems": [ 00:06:48.571 { 00:06:48.571 "subsystem": "bdev", 00:06:48.571 "config": [ 00:06:48.571 { 00:06:48.571 "params": { 00:06:48.571 "trtype": "pcie", 00:06:48.571 "traddr": "0000:00:06.0", 00:06:48.571 "name": "Nvme0" 00:06:48.571 }, 00:06:48.571 "method": "bdev_nvme_attach_controller" 00:06:48.571 }, 00:06:48.571 { 00:06:48.571 "method": "bdev_wait_for_examine" 00:06:48.571 } 00:06:48.571 ] 00:06:48.571 } 00:06:48.571 ] 00:06:48.571 } 00:06:48.571 [2024-09-29 00:19:04.399950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.830 [2024-09-29 00:19:04.449127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.089  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:49.089 00:06:49.089 00:19:04 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.089 00:19:04 -- dd/basic_rw.sh@23 -- # count=7 00:06:49.089 00:19:04 -- dd/basic_rw.sh@24 -- # count=7 00:06:49.089 00:19:04 -- dd/basic_rw.sh@25 -- # size=57344 00:06:49.089 00:19:04 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:49.089 00:19:04 -- dd/common.sh@98 -- # xtrace_disable 00:06:49.089 00:19:04 -- common/autotest_common.sh@10 -- # set +x 00:06:49.348 00:19:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:49.348 00:19:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:49.348 00:19:05 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.348 00:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.608 [2024-09-29 00:19:05.256635] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:49.608 [2024-09-29 00:19:05.256985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57741 ] 00:06:49.608 { 00:06:49.608 "subsystems": [ 00:06:49.608 { 00:06:49.608 "subsystem": "bdev", 00:06:49.608 "config": [ 00:06:49.608 { 00:06:49.608 "params": { 00:06:49.608 "trtype": "pcie", 00:06:49.608 "traddr": "0000:00:06.0", 00:06:49.608 "name": "Nvme0" 00:06:49.608 }, 00:06:49.608 "method": "bdev_nvme_attach_controller" 00:06:49.608 }, 00:06:49.608 { 00:06:49.608 "method": "bdev_wait_for_examine" 00:06:49.608 } 00:06:49.608 ] 00:06:49.608 } 00:06:49.608 ] 00:06:49.608 } 00:06:49.608 [2024-09-29 00:19:05.397514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.608 [2024-09-29 00:19:05.445091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.127  Copying: 56/56 [kB] (average 54 MBps) 00:06:50.127 00:06:50.127 00:19:05 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:50.127 00:19:05 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.127 00:19:05 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.127 00:19:05 -- common/autotest_common.sh@10 -- # set +x 00:06:50.127 [2024-09-29 00:19:05.796502] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:50.127 [2024-09-29 00:19:05.796594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57754 ] 00:06:50.127 { 00:06:50.127 "subsystems": [ 00:06:50.127 { 00:06:50.127 "subsystem": "bdev", 00:06:50.127 "config": [ 00:06:50.127 { 00:06:50.127 "params": { 00:06:50.127 "trtype": "pcie", 00:06:50.127 "traddr": "0000:00:06.0", 00:06:50.127 "name": "Nvme0" 00:06:50.127 }, 00:06:50.127 "method": "bdev_nvme_attach_controller" 00:06:50.127 }, 00:06:50.127 { 00:06:50.127 "method": "bdev_wait_for_examine" 00:06:50.127 } 00:06:50.127 ] 00:06:50.127 } 00:06:50.127 ] 00:06:50.127 } 00:06:50.127 [2024-09-29 00:19:05.932454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.387 [2024-09-29 00:19:05.981419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.646  Copying: 56/56 [kB] (average 54 MBps) 00:06:50.646 00:06:50.646 00:19:06 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.646 00:19:06 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:50.646 00:19:06 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.646 00:19:06 -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.646 00:19:06 -- dd/common.sh@12 -- # local size=57344 00:06:50.646 00:19:06 -- dd/common.sh@14 -- # local bs=1048576 00:06:50.646 00:19:06 -- dd/common.sh@15 -- # local count=1 00:06:50.646 00:19:06 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.646 00:19:06 -- dd/common.sh@18 -- # gen_conf 00:06:50.646 00:19:06 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.646 00:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:50.646 [2024-09-29 00:19:06.326598] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:06:50.646 [2024-09-29 00:19:06.326687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57773 ] 00:06:50.646 { 00:06:50.646 "subsystems": [ 00:06:50.646 { 00:06:50.646 "subsystem": "bdev", 00:06:50.646 "config": [ 00:06:50.646 { 00:06:50.646 "params": { 00:06:50.646 "trtype": "pcie", 00:06:50.646 "traddr": "0000:00:06.0", 00:06:50.646 "name": "Nvme0" 00:06:50.646 }, 00:06:50.646 "method": "bdev_nvme_attach_controller" 00:06:50.646 }, 00:06:50.646 { 00:06:50.646 "method": "bdev_wait_for_examine" 00:06:50.646 } 00:06:50.646 ] 00:06:50.646 } 00:06:50.646 ] 00:06:50.646 } 00:06:50.646 [2024-09-29 00:19:06.453970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.905 [2024-09-29 00:19:06.509928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.164  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:51.165 00:06:51.165 00:19:06 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:51.165 00:19:06 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:51.165 00:19:06 -- dd/basic_rw.sh@23 -- # count=3 00:06:51.165 00:19:06 -- dd/basic_rw.sh@24 -- # count=3 00:06:51.165 00:19:06 -- dd/basic_rw.sh@25 -- # size=49152 00:06:51.165 00:19:06 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:51.165 00:19:06 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.165 00:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:51.423 00:19:07 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:51.423 00:19:07 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:51.423 00:19:07 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.424 00:19:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.424 [2024-09-29 00:19:07.221515] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:51.424 [2024-09-29 00:19:07.221606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57785 ] 00:06:51.424 { 00:06:51.424 "subsystems": [ 00:06:51.424 { 00:06:51.424 "subsystem": "bdev", 00:06:51.424 "config": [ 00:06:51.424 { 00:06:51.424 "params": { 00:06:51.424 "trtype": "pcie", 00:06:51.424 "traddr": "0000:00:06.0", 00:06:51.424 "name": "Nvme0" 00:06:51.424 }, 00:06:51.424 "method": "bdev_nvme_attach_controller" 00:06:51.424 }, 00:06:51.424 { 00:06:51.424 "method": "bdev_wait_for_examine" 00:06:51.424 } 00:06:51.424 ] 00:06:51.424 } 00:06:51.424 ] 00:06:51.424 } 00:06:51.683 [2024-09-29 00:19:07.357395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.683 [2024-09-29 00:19:07.405671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.943  Copying: 48/48 [kB] (average 46 MBps) 00:06:51.943 00:06:51.943 00:19:07 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:51.943 00:19:07 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:51.943 00:19:07 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.943 00:19:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.943 [2024-09-29 00:19:07.742687] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:51.943 [2024-09-29 00:19:07.742968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:06:51.943 { 00:06:51.943 "subsystems": [ 00:06:51.943 { 00:06:51.943 "subsystem": "bdev", 00:06:51.943 "config": [ 00:06:51.943 { 00:06:51.943 "params": { 00:06:51.943 "trtype": "pcie", 00:06:51.943 "traddr": "0000:00:06.0", 00:06:51.943 "name": "Nvme0" 00:06:51.943 }, 00:06:51.943 "method": "bdev_nvme_attach_controller" 00:06:51.943 }, 00:06:51.943 { 00:06:51.943 "method": "bdev_wait_for_examine" 00:06:51.943 } 00:06:51.943 ] 00:06:51.943 } 00:06:51.943 ] 00:06:51.943 } 00:06:52.202 [2024-09-29 00:19:07.876171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.202 [2024-09-29 00:19:07.923884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.462  Copying: 48/48 [kB] (average 46 MBps) 00:06:52.462 00:06:52.462 00:19:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.462 00:19:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:52.462 00:19:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:52.462 00:19:08 -- dd/common.sh@11 -- # local nvme_ref= 00:06:52.462 00:19:08 -- dd/common.sh@12 -- # local size=49152 00:06:52.462 00:19:08 -- dd/common.sh@14 -- # local bs=1048576 00:06:52.462 00:19:08 -- dd/common.sh@15 -- # local count=1 00:06:52.462 00:19:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:52.462 00:19:08 -- dd/common.sh@18 -- # gen_conf 00:06:52.462 00:19:08 -- dd/common.sh@31 -- # xtrace_disable 00:06:52.462 00:19:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.462 [2024-09-29 00:19:08.276821] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:06:52.462 [2024-09-29 00:19:08.276916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57811 ] 00:06:52.462 { 00:06:52.462 "subsystems": [ 00:06:52.462 { 00:06:52.462 "subsystem": "bdev", 00:06:52.462 "config": [ 00:06:52.462 { 00:06:52.462 "params": { 00:06:52.462 "trtype": "pcie", 00:06:52.462 "traddr": "0000:00:06.0", 00:06:52.462 "name": "Nvme0" 00:06:52.462 }, 00:06:52.462 "method": "bdev_nvme_attach_controller" 00:06:52.462 }, 00:06:52.462 { 00:06:52.462 "method": "bdev_wait_for_examine" 00:06:52.462 } 00:06:52.462 ] 00:06:52.462 } 00:06:52.462 ] 00:06:52.462 } 00:06:52.721 [2024-09-29 00:19:08.412957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.721 [2024-09-29 00:19:08.460927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.980  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:52.980 00:06:52.980 00:19:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:52.980 00:19:08 -- dd/basic_rw.sh@23 -- # count=3 00:06:52.980 00:19:08 -- dd/basic_rw.sh@24 -- # count=3 00:06:52.980 00:19:08 -- dd/basic_rw.sh@25 -- # size=49152 00:06:52.980 00:19:08 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:52.980 00:19:08 -- dd/common.sh@98 -- # xtrace_disable 00:06:52.980 00:19:08 -- common/autotest_common.sh@10 -- # set +x 00:06:53.549 00:19:09 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:53.549 00:19:09 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:53.549 00:19:09 -- dd/common.sh@31 -- # xtrace_disable 00:06:53.549 00:19:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.549 [2024-09-29 00:19:09.178636] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:53.549 [2024-09-29 00:19:09.179483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57824 ] 00:06:53.549 { 00:06:53.549 "subsystems": [ 00:06:53.549 { 00:06:53.549 "subsystem": "bdev", 00:06:53.549 "config": [ 00:06:53.549 { 00:06:53.549 "params": { 00:06:53.549 "trtype": "pcie", 00:06:53.549 "traddr": "0000:00:06.0", 00:06:53.549 "name": "Nvme0" 00:06:53.549 }, 00:06:53.549 "method": "bdev_nvme_attach_controller" 00:06:53.549 }, 00:06:53.549 { 00:06:53.549 "method": "bdev_wait_for_examine" 00:06:53.549 } 00:06:53.549 ] 00:06:53.549 } 00:06:53.549 ] 00:06:53.549 } 00:06:53.549 [2024-09-29 00:19:09.317248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.549 [2024-09-29 00:19:09.372877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.809  Copying: 48/48 [kB] (average 46 MBps) 00:06:53.809 00:06:53.809 00:19:09 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:53.809 00:19:09 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:54.068 00:19:09 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.068 00:19:09 -- common/autotest_common.sh@10 -- # set +x 00:06:54.068 [2024-09-29 00:19:09.707785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:54.068 [2024-09-29 00:19:09.707872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57842 ] 00:06:54.068 { 00:06:54.068 "subsystems": [ 00:06:54.068 { 00:06:54.068 "subsystem": "bdev", 00:06:54.068 "config": [ 00:06:54.068 { 00:06:54.068 "params": { 00:06:54.068 "trtype": "pcie", 00:06:54.068 "traddr": "0000:00:06.0", 00:06:54.068 "name": "Nvme0" 00:06:54.068 }, 00:06:54.068 "method": "bdev_nvme_attach_controller" 00:06:54.068 }, 00:06:54.068 { 00:06:54.068 "method": "bdev_wait_for_examine" 00:06:54.068 } 00:06:54.068 ] 00:06:54.068 } 00:06:54.068 ] 00:06:54.068 } 00:06:54.068 [2024-09-29 00:19:09.843573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.068 [2024-09-29 00:19:09.897351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.587  Copying: 48/48 [kB] (average 46 MBps) 00:06:54.587 00:06:54.587 00:19:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.587 00:19:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:54.587 00:19:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:54.587 00:19:10 -- dd/common.sh@11 -- # local nvme_ref= 00:06:54.587 00:19:10 -- dd/common.sh@12 -- # local size=49152 00:06:54.587 00:19:10 -- dd/common.sh@14 -- # local bs=1048576 00:06:54.587 00:19:10 -- dd/common.sh@15 -- # local count=1 00:06:54.587 00:19:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:54.587 00:19:10 -- dd/common.sh@18 -- # gen_conf 00:06:54.587 00:19:10 -- dd/common.sh@31 -- # xtrace_disable 00:06:54.587 00:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.587 [2024-09-29 00:19:10.248093] Starting SPDK v24.01.1-pre git sha1 
726a04d70 / DPDK 23.11.0 initialization... 00:06:54.587 [2024-09-29 00:19:10.248217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57850 ] 00:06:54.587 { 00:06:54.587 "subsystems": [ 00:06:54.587 { 00:06:54.587 "subsystem": "bdev", 00:06:54.587 "config": [ 00:06:54.587 { 00:06:54.587 "params": { 00:06:54.587 "trtype": "pcie", 00:06:54.587 "traddr": "0000:00:06.0", 00:06:54.588 "name": "Nvme0" 00:06:54.588 }, 00:06:54.588 "method": "bdev_nvme_attach_controller" 00:06:54.588 }, 00:06:54.588 { 00:06:54.588 "method": "bdev_wait_for_examine" 00:06:54.588 } 00:06:54.588 ] 00:06:54.588 } 00:06:54.588 ] 00:06:54.588 } 00:06:54.588 [2024-09-29 00:19:10.387778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.847 [2024-09-29 00:19:10.454389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.107  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:55.107 00:06:55.107 00:06:55.107 real 0m12.208s 00:06:55.107 user 0m9.070s 00:06:55.107 sys 0m2.097s 00:06:55.107 00:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.107 ************************************ 00:06:55.107 END TEST dd_rw 00:06:55.107 ************************************ 00:06:55.107 00:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.107 00:19:10 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:55.107 00:19:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.107 00:19:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.107 00:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.107 ************************************ 00:06:55.107 START TEST dd_rw_offset 00:06:55.107 ************************************ 00:06:55.107 00:19:10 -- common/autotest_common.sh@1104 -- # basic_offset 00:06:55.107 00:19:10 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:55.107 00:19:10 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:55.107 00:19:10 -- dd/common.sh@98 -- # xtrace_disable 00:06:55.107 00:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.107 00:19:10 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:55.107 00:19:10 -- dd/basic_rw.sh@56 -- # 
data=njleac7dv4irg2mikppxt27nu90cz732lup4awa0fpju8n7wetawqk5lxkqt9lh0833urnmpngmr5ly924iiq8bj4k0xywnwfcwol6x6ssw2lfsupoy66x2n2jz61dlupekdj3mndazymrbweqeq5gv1cgijy4nykqapln0hflf29eazdvvs8ih96dcmo1aoylb7vokuj4km78u7th7vwfs22vgqu1hw5zmt823sm9j3upnjzf8yn5v0xbm3ancu7rt9vqrsw5aohwxey138dt8126j3y524c14xslsrpuuqwuf2mym1zlc163csmwswfk7dq11gj3hyk9d6xvqe822hgvzt7wfmni9ubdmhu6cqs5vkaagri1huj88m1rwdlqyhmk677ystm9hyn6exd8djfv49p5cdu73y3crtc26t7puc4s9itq39exzti3ezz7n63ucqy0cz8qtiehaclus02zcvhf9zpebddfydg35tg168kfrz5q4ho5xe2pv5nykev8auwm6drqsisyyp97wkmafh17adpaueisgjinpoblo0yhuoub1qehg431w9uftgvn0h7rr501s26dovcv42wdj6h9dt9036yjihphuy61str7rnp16mw2v9ejhpjro9svfffua23n4rmb8zw3ldvqxf7usch29gka5x1do2p4y2xv2zs9ztml4occtfkth60qrhqx64ibn4e41dy8kjtod67zv7vrbv1isf55ymuey0pj149fr120oivdaqhq9gnpy7k4ws8p6qq5c595kfofuo5ki1i6btsvzj4sxvpojxuw2wwejh8zoxmfxb5hyl59b8siziztzeb73tjxrugzki0340j6hg78zba261a2gywxb4h0zsxz16oebimdnccag8yixby0nd532pu78fynb5rhrgder78u0j2od9k5ouuq141zfpu2ixiugvju19wq76156jlqpmaiajd9slhbrfxfmlm7rmxxz1uic9850teffd4ixy0yd1i0ato71q6ckims8aetiufivgrjrwq9bidtib2wl52dyqgadmyeni0bp4vnqlix5yfdrgrkpmv5q90h9x2zde541b9s2oqg73laljy1c68eekejp1x6dxnhde4c3y75wh9smbfw8cemmqwh9hjcfx4pk1vn6tk72ifuf98hddxt6xif0yfm1icmnowsyt5ex2gumy9kjnz2bwzh59sddg0qgqly1di5h5fq306boo6ir3brzs7mlc0x4quwijlbgpggnawn6ijic0zu1bzzpec4unt8ad2qb8i0ec8y0g94db9zjldh2e9so212nvy5b9xdzznwrfilap4l1pzr3nagxggchduvx9q5hc9n1l7fc33jwj1l8np970jxam2wy1e4ybhtz9g4d4iqxv92m09fxilo672p8nnhwcwjdrr3kgs1z0oxko7nnhfeii43sh297g0o7xgqcvorajxttih9kyg8ebm0k85rku93ihuujnxzqbgd0sl7pi2ajjle6ci7hrqeoj7uhvlynhpe3bhcmxvqvuz6f8x38uc95holvrl05ordezhtbwysfjbtr9n5uff6gz73nilj7p84z5oywuupbmaceglmb2kb5720wk2b5miio9cvhmxd381pxtmokwguv6nhb5f0332hqo26t0ml4i8b0wbrte62s9dzbx4om6yu2r3q4afkoicwi8zljj9qbuujur3pjxeervffm2doc1lplaelc8o10sykylu3mt2v76m3kq3ifnomv2f303ra27nwwxcifdoga6bi8wnxe9bh6zfnqfslq80966nxds2i6t3vnx1b4wcxdcq26n8f9q0oylox137wxh0i4irl0x1pfnr3jqbj9zpuibvrmak58azhu2ot9x6p8haw2k3v3l5x5bub3jcbe8exvvgps61l2ja4s0kxfejqadu0osssqogv8x503eetwzmbdtta4h5we6hoo34skwpuw1vmla8ho7424l5er22iyi7j364cuz1v4i3amrtbjfx0jdvsh9p1djtfhh2a4k4fgfs002o2ul7vs8dx4o6chtkvkhvgukl12giuegpetmci6odqygwd30rg1tzhdfcn42g1dh62fd5yoz3pdabzfem6c8ta1p0yv6l72fpvwl0dar3zeiogzoyxd7sk5rzab5adqiloqdxfc4umr32b55345u8n0hkmanp6h3lfnkdhql75vsxdlquvlsq9hl87aefgrt28m82yq2xyvkb8neiu805jj5qs58uaimqyay9jf4csaqz3trmobflcu0bbzwwok5dn6p9h4mutoydxmg8ttm4p8b2k2ynbeytpybb1ea1rlbam3cswl0bdlwskwbw33sn9cphzb5zqs36teyh23m208ifizqynss4rdphsvt8z6193c4qt09v5611zvvr0zcwulrhtnkl8ult8b1oixexb2rb0yq5fe1th6xc7k5h6yrhxv5yfaqi3z47we58nlg794w6tgoynbb1xkt3ocb83bwjd85leorkx9mqonwfuy6xvix7lp7aqauqoxwnzn6myjm6hicpm8bpewnigmmz1zaaq0dfnjbb1ope7pldftw9ld4q3w5ske1exwavmuei0ytasl9m76dhhsjozkbyyv93kheu8jq0fket5kqdqgqxxi7w10iv02brdcmvqlyyir25dr74heiua2mgbezo8wtiyb3rn7frm1xy3q1c7w9yd9d82qqk06sm8wz96jlltneczsf41u6ez6wj8570kgroewtz9zp4due09msxam7koao5tij5u88t2p4giyf46k2kh32876jzxxm46l59ckfhlzb3vnc42zsg2k9gfthl7eufe03uxn1ebmxm9ubbrbauz3p506vhwmionw04bkpse6nalxx4op1e6nsy2dmzu8l9bbakcy2k7snjrxy4oy8s083k6ir9zbpjxeaxbfz33p1t2hl3bk9p1yjkowhtqn4y77iktvqew5mq4rzoa0owpwrkcl365vexibxdtafvf5egowgyqw27yt9cc59wve0tqlp4melol8x6c12u74ebky4ekg5vwrvndtv8pchjv3lrchgp3lbeidqdnjaaf9zpdvjvrfpy3yjdfmfyupru433vvwuuz5l4wibb3jw6sntkpwta3e79fkuqgwes70n2xlqnuerx4ckia5yj72azh2tvq2comcam236q8eb41gl7zkmroc76ko5eatf7mwjsq4xslcfpk7g1qlbrhivpxuyh2hsxgfvdbypddvdgoyjjnvocok1ct20uzum0wrjurukoaggix9bgjjvv4514ocsn6jzn3vws5o6w4xhqsuuozun146xd15lv4ic6tj664l8qym0u6ld71sug2aug0zmtabovff9vzruflxoe70kw5486xalxo4yj2pr46zy1kcmccyf47jfsoolg96pq0ukfk002t1m5g43qx4mtzr32sxrqjflavrw6djwpb1dokxks9z7l26pm80ztquu4c
2qekeg610ihjhu54kv7hq6wz07y8yd8zc3zj02arup77zpuhhcjujcuwgceraxrdb19upax8u8n17m7ikyqohfehd9nxc53nsu6cjjc3pnjajxsypc22ftcxh2kfol5iau75tpzw56a9mni99pfuifnpfzhybwrms4e5aved1cqjiot2x4nikzxh2w58099t5vb3906ugk9fw0bvialjx3u8h0u1a0wdsocafhxlapyvr3tmj96cs5u70htsa3lgkjuifyle9fcf3jjag4y2tudmo4ith35hdwdvjoa9tldja2oyew61ghilpkiccojjdbac5o73vsyrux8obwpoh3kf40f2nc1t9lrcskjf5r6ybadpqdddupy9oi11da5p3mrt11cdtveh7tbvqgh7jh1sfaqrgsjcda0ykj1d4j6fpto778zrkgolirwezste0nwhm4b25cnghl7vownw19nh50yauxjx28l97jw5r6pylq0w3nk2q0u1gg4cqjw18qum8i0wjox47lvzaamfv8zq5jvirykr5l 00:06:55.107 00:19:10 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:55.107 00:19:10 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:55.107 00:19:10 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.107 00:19:10 -- common/autotest_common.sh@10 -- # set +x 00:06:55.107 [2024-09-29 00:19:10.883969] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:55.107 [2024-09-29 00:19:10.884082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57886 ] 00:06:55.107 { 00:06:55.107 "subsystems": [ 00:06:55.107 { 00:06:55.107 "subsystem": "bdev", 00:06:55.107 "config": [ 00:06:55.107 { 00:06:55.107 "params": { 00:06:55.107 "trtype": "pcie", 00:06:55.107 "traddr": "0000:00:06.0", 00:06:55.107 "name": "Nvme0" 00:06:55.107 }, 00:06:55.107 "method": "bdev_nvme_attach_controller" 00:06:55.107 }, 00:06:55.107 { 00:06:55.107 "method": "bdev_wait_for_examine" 00:06:55.107 } 00:06:55.107 ] 00:06:55.107 } 00:06:55.107 ] 00:06:55.107 } 00:06:55.366 [2024-09-29 00:19:11.020647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.366 [2024-09-29 00:19:11.066428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.626  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:55.626 00:06:55.626 00:19:11 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:55.626 00:19:11 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:55.626 00:19:11 -- dd/common.sh@31 -- # xtrace_disable 00:06:55.626 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.626 [2024-09-29 00:19:11.401890] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:55.626 [2024-09-29 00:19:11.401992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57898 ] 00:06:55.626 { 00:06:55.626 "subsystems": [ 00:06:55.626 { 00:06:55.626 "subsystem": "bdev", 00:06:55.626 "config": [ 00:06:55.626 { 00:06:55.626 "params": { 00:06:55.626 "trtype": "pcie", 00:06:55.626 "traddr": "0000:00:06.0", 00:06:55.626 "name": "Nvme0" 00:06:55.626 }, 00:06:55.626 "method": "bdev_nvme_attach_controller" 00:06:55.626 }, 00:06:55.626 { 00:06:55.626 "method": "bdev_wait_for_examine" 00:06:55.626 } 00:06:55.626 ] 00:06:55.626 } 00:06:55.627 ] 00:06:55.627 } 00:06:55.886 [2024-09-29 00:19:11.540572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.886 [2024-09-29 00:19:11.590082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.146  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:56.146 00:06:56.146 00:19:11 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:56.147 00:19:11 -- dd/basic_rw.sh@72 -- # [[ njleac7dv4irg2mikppxt27nu90cz732lup4awa0fpju8n7wetawqk5lxkqt9lh0833urnmpngmr5ly924iiq8bj4k0xywnwfcwol6x6ssw2lfsupoy66x2n2jz61dlupekdj3mndazymrbweqeq5gv1cgijy4nykqapln0hflf29eazdvvs8ih96dcmo1aoylb7vokuj4km78u7th7vwfs22vgqu1hw5zmt823sm9j3upnjzf8yn5v0xbm3ancu7rt9vqrsw5aohwxey138dt8126j3y524c14xslsrpuuqwuf2mym1zlc163csmwswfk7dq11gj3hyk9d6xvqe822hgvzt7wfmni9ubdmhu6cqs5vkaagri1huj88m1rwdlqyhmk677ystm9hyn6exd8djfv49p5cdu73y3crtc26t7puc4s9itq39exzti3ezz7n63ucqy0cz8qtiehaclus02zcvhf9zpebddfydg35tg168kfrz5q4ho5xe2pv5nykev8auwm6drqsisyyp97wkmafh17adpaueisgjinpoblo0yhuoub1qehg431w9uftgvn0h7rr501s26dovcv42wdj6h9dt9036yjihphuy61str7rnp16mw2v9ejhpjro9svfffua23n4rmb8zw3ldvqxf7usch29gka5x1do2p4y2xv2zs9ztml4occtfkth60qrhqx64ibn4e41dy8kjtod67zv7vrbv1isf55ymuey0pj149fr120oivdaqhq9gnpy7k4ws8p6qq5c595kfofuo5ki1i6btsvzj4sxvpojxuw2wwejh8zoxmfxb5hyl59b8siziztzeb73tjxrugzki0340j6hg78zba261a2gywxb4h0zsxz16oebimdnccag8yixby0nd532pu78fynb5rhrgder78u0j2od9k5ouuq141zfpu2ixiugvju19wq76156jlqpmaiajd9slhbrfxfmlm7rmxxz1uic9850teffd4ixy0yd1i0ato71q6ckims8aetiufivgrjrwq9bidtib2wl52dyqgadmyeni0bp4vnqlix5yfdrgrkpmv5q90h9x2zde541b9s2oqg73laljy1c68eekejp1x6dxnhde4c3y75wh9smbfw8cemmqwh9hjcfx4pk1vn6tk72ifuf98hddxt6xif0yfm1icmnowsyt5ex2gumy9kjnz2bwzh59sddg0qgqly1di5h5fq306boo6ir3brzs7mlc0x4quwijlbgpggnawn6ijic0zu1bzzpec4unt8ad2qb8i0ec8y0g94db9zjldh2e9so212nvy5b9xdzznwrfilap4l1pzr3nagxggchduvx9q5hc9n1l7fc33jwj1l8np970jxam2wy1e4ybhtz9g4d4iqxv92m09fxilo672p8nnhwcwjdrr3kgs1z0oxko7nnhfeii43sh297g0o7xgqcvorajxttih9kyg8ebm0k85rku93ihuujnxzqbgd0sl7pi2ajjle6ci7hrqeoj7uhvlynhpe3bhcmxvqvuz6f8x38uc95holvrl05ordezhtbwysfjbtr9n5uff6gz73nilj7p84z5oywuupbmaceglmb2kb5720wk2b5miio9cvhmxd381pxtmokwguv6nhb5f0332hqo26t0ml4i8b0wbrte62s9dzbx4om6yu2r3q4afkoicwi8zljj9qbuujur3pjxeervffm2doc1lplaelc8o10sykylu3mt2v76m3kq3ifnomv2f303ra27nwwxcifdoga6bi8wnxe9bh6zfnqfslq80966nxds2i6t3vnx1b4wcxdcq26n8f9q0oylox137wxh0i4irl0x1pfnr3jqbj9zpuibvrmak58azhu2ot9x6p8haw2k3v3l5x5bub3jcbe8exvvgps61l2ja4s0kxfejqadu0osssqogv8x503eetwzmbdtta4h5we6hoo34skwpuw1vmla8ho7424l5er22iyi7j364cuz1v4i3amrtbjfx0jdvsh9p1djtfhh2a4k4fgfs002o2ul7vs8dx4o6chtkvkhvgukl12giuegpetmci6odqygwd30rg1tzhdfcn42g1dh62fd5yoz3pdabzfem6c8ta1p0yv6l72fpvwl0dar3zeiogzoyxd7sk5rzab5adqiloqdxfc4umr32b55345u8n0hkmanp6h3lfnkdhql75vsxdlquvlsq9hl87aefgrt28m82yq2xyvkb8neiu805jj5qs58uaimqyay9jf4csaqz3trmobflcu0bbzwwok5dn6p9h4mutoydxmg8ttm4p8b2k2ynbeytpybb1ea1rlbam3cswl0bdlwsk
wbw33sn9cphzb5zqs36teyh23m208ifizqynss4rdphsvt8z6193c4qt09v5611zvvr0zcwulrhtnkl8ult8b1oixexb2rb0yq5fe1th6xc7k5h6yrhxv5yfaqi3z47we58nlg794w6tgoynbb1xkt3ocb83bwjd85leorkx9mqonwfuy6xvix7lp7aqauqoxwnzn6myjm6hicpm8bpewnigmmz1zaaq0dfnjbb1ope7pldftw9ld4q3w5ske1exwavmuei0ytasl9m76dhhsjozkbyyv93kheu8jq0fket5kqdqgqxxi7w10iv02brdcmvqlyyir25dr74heiua2mgbezo8wtiyb3rn7frm1xy3q1c7w9yd9d82qqk06sm8wz96jlltneczsf41u6ez6wj8570kgroewtz9zp4due09msxam7koao5tij5u88t2p4giyf46k2kh32876jzxxm46l59ckfhlzb3vnc42zsg2k9gfthl7eufe03uxn1ebmxm9ubbrbauz3p506vhwmionw04bkpse6nalxx4op1e6nsy2dmzu8l9bbakcy2k7snjrxy4oy8s083k6ir9zbpjxeaxbfz33p1t2hl3bk9p1yjkowhtqn4y77iktvqew5mq4rzoa0owpwrkcl365vexibxdtafvf5egowgyqw27yt9cc59wve0tqlp4melol8x6c12u74ebky4ekg5vwrvndtv8pchjv3lrchgp3lbeidqdnjaaf9zpdvjvrfpy3yjdfmfyupru433vvwuuz5l4wibb3jw6sntkpwta3e79fkuqgwes70n2xlqnuerx4ckia5yj72azh2tvq2comcam236q8eb41gl7zkmroc76ko5eatf7mwjsq4xslcfpk7g1qlbrhivpxuyh2hsxgfvdbypddvdgoyjjnvocok1ct20uzum0wrjurukoaggix9bgjjvv4514ocsn6jzn3vws5o6w4xhqsuuozun146xd15lv4ic6tj664l8qym0u6ld71sug2aug0zmtabovff9vzruflxoe70kw5486xalxo4yj2pr46zy1kcmccyf47jfsoolg96pq0ukfk002t1m5g43qx4mtzr32sxrqjflavrw6djwpb1dokxks9z7l26pm80ztquu4c2qekeg610ihjhu54kv7hq6wz07y8yd8zc3zj02arup77zpuhhcjujcuwgceraxrdb19upax8u8n17m7ikyqohfehd9nxc53nsu6cjjc3pnjajxsypc22ftcxh2kfol5iau75tpzw56a9mni99pfuifnpfzhybwrms4e5aved1cqjiot2x4nikzxh2w58099t5vb3906ugk9fw0bvialjx3u8h0u1a0wdsocafhxlapyvr3tmj96cs5u70htsa3lgkjuifyle9fcf3jjag4y2tudmo4ith35hdwdvjoa9tldja2oyew61ghilpkiccojjdbac5o73vsyrux8obwpoh3kf40f2nc1t9lrcskjf5r6ybadpqdddupy9oi11da5p3mrt11cdtveh7tbvqgh7jh1sfaqrgsjcda0ykj1d4j6fpto778zrkgolirwezste0nwhm4b25cnghl7vownw19nh50yauxjx28l97jw5r6pylq0w3nk2q0u1gg4cqjw18qum8i0wjox47lvzaamfv8zq5jvirykr5l == \n\j\l\e\a\c\7\d\v\4\i\r\g\2\m\i\k\p\p\x\t\2\7\n\u\9\0\c\z\7\3\2\l\u\p\4\a\w\a\0\f\p\j\u\8\n\7\w\e\t\a\w\q\k\5\l\x\k\q\t\9\l\h\0\8\3\3\u\r\n\m\p\n\g\m\r\5\l\y\9\2\4\i\i\q\8\b\j\4\k\0\x\y\w\n\w\f\c\w\o\l\6\x\6\s\s\w\2\l\f\s\u\p\o\y\6\6\x\2\n\2\j\z\6\1\d\l\u\p\e\k\d\j\3\m\n\d\a\z\y\m\r\b\w\e\q\e\q\5\g\v\1\c\g\i\j\y\4\n\y\k\q\a\p\l\n\0\h\f\l\f\2\9\e\a\z\d\v\v\s\8\i\h\9\6\d\c\m\o\1\a\o\y\l\b\7\v\o\k\u\j\4\k\m\7\8\u\7\t\h\7\v\w\f\s\2\2\v\g\q\u\1\h\w\5\z\m\t\8\2\3\s\m\9\j\3\u\p\n\j\z\f\8\y\n\5\v\0\x\b\m\3\a\n\c\u\7\r\t\9\v\q\r\s\w\5\a\o\h\w\x\e\y\1\3\8\d\t\8\1\2\6\j\3\y\5\2\4\c\1\4\x\s\l\s\r\p\u\u\q\w\u\f\2\m\y\m\1\z\l\c\1\6\3\c\s\m\w\s\w\f\k\7\d\q\1\1\g\j\3\h\y\k\9\d\6\x\v\q\e\8\2\2\h\g\v\z\t\7\w\f\m\n\i\9\u\b\d\m\h\u\6\c\q\s\5\v\k\a\a\g\r\i\1\h\u\j\8\8\m\1\r\w\d\l\q\y\h\m\k\6\7\7\y\s\t\m\9\h\y\n\6\e\x\d\8\d\j\f\v\4\9\p\5\c\d\u\7\3\y\3\c\r\t\c\2\6\t\7\p\u\c\4\s\9\i\t\q\3\9\e\x\z\t\i\3\e\z\z\7\n\6\3\u\c\q\y\0\c\z\8\q\t\i\e\h\a\c\l\u\s\0\2\z\c\v\h\f\9\z\p\e\b\d\d\f\y\d\g\3\5\t\g\1\6\8\k\f\r\z\5\q\4\h\o\5\x\e\2\p\v\5\n\y\k\e\v\8\a\u\w\m\6\d\r\q\s\i\s\y\y\p\9\7\w\k\m\a\f\h\1\7\a\d\p\a\u\e\i\s\g\j\i\n\p\o\b\l\o\0\y\h\u\o\u\b\1\q\e\h\g\4\3\1\w\9\u\f\t\g\v\n\0\h\7\r\r\5\0\1\s\2\6\d\o\v\c\v\4\2\w\d\j\6\h\9\d\t\9\0\3\6\y\j\i\h\p\h\u\y\6\1\s\t\r\7\r\n\p\1\6\m\w\2\v\9\e\j\h\p\j\r\o\9\s\v\f\f\f\u\a\2\3\n\4\r\m\b\8\z\w\3\l\d\v\q\x\f\7\u\s\c\h\2\9\g\k\a\5\x\1\d\o\2\p\4\y\2\x\v\2\z\s\9\z\t\m\l\4\o\c\c\t\f\k\t\h\6\0\q\r\h\q\x\6\4\i\b\n\4\e\4\1\d\y\8\k\j\t\o\d\6\7\z\v\7\v\r\b\v\1\i\s\f\5\5\y\m\u\e\y\0\p\j\1\4\9\f\r\1\2\0\o\i\v\d\a\q\h\q\9\g\n\p\y\7\k\4\w\s\8\p\6\q\q\5\c\5\9\5\k\f\o\f\u\o\5\k\i\1\i\6\b\t\s\v\z\j\4\s\x\v\p\o\j\x\u\w\2\w\w\e\j\h\8\z\o\x\m\f\x\b\5\h\y\l\5\9\b\8\s\i\z\i\z\t\z\e\b\7\3\t\j\x\r\u\g\z\k\i\0\3\4\0\j\6\h\g\7\8\z\b\a\2\6\1\a\2\g\y\w\x\b\4\h\0\z\s\x\z\1\6\o\e\b\i\m\d\n\c\c\a\g\8\y\i\x\b\y\0\n\d\5\3\2\p\u\7\8\f\
y\n\b\5\r\h\r\g\d\e\r\7\8\u\0\j\2\o\d\9\k\5\o\u\u\q\1\4\1\z\f\p\u\2\i\x\i\u\g\v\j\u\1\9\w\q\7\6\1\5\6\j\l\q\p\m\a\i\a\j\d\9\s\l\h\b\r\f\x\f\m\l\m\7\r\m\x\x\z\1\u\i\c\9\8\5\0\t\e\f\f\d\4\i\x\y\0\y\d\1\i\0\a\t\o\7\1\q\6\c\k\i\m\s\8\a\e\t\i\u\f\i\v\g\r\j\r\w\q\9\b\i\d\t\i\b\2\w\l\5\2\d\y\q\g\a\d\m\y\e\n\i\0\b\p\4\v\n\q\l\i\x\5\y\f\d\r\g\r\k\p\m\v\5\q\9\0\h\9\x\2\z\d\e\5\4\1\b\9\s\2\o\q\g\7\3\l\a\l\j\y\1\c\6\8\e\e\k\e\j\p\1\x\6\d\x\n\h\d\e\4\c\3\y\7\5\w\h\9\s\m\b\f\w\8\c\e\m\m\q\w\h\9\h\j\c\f\x\4\p\k\1\v\n\6\t\k\7\2\i\f\u\f\9\8\h\d\d\x\t\6\x\i\f\0\y\f\m\1\i\c\m\n\o\w\s\y\t\5\e\x\2\g\u\m\y\9\k\j\n\z\2\b\w\z\h\5\9\s\d\d\g\0\q\g\q\l\y\1\d\i\5\h\5\f\q\3\0\6\b\o\o\6\i\r\3\b\r\z\s\7\m\l\c\0\x\4\q\u\w\i\j\l\b\g\p\g\g\n\a\w\n\6\i\j\i\c\0\z\u\1\b\z\z\p\e\c\4\u\n\t\8\a\d\2\q\b\8\i\0\e\c\8\y\0\g\9\4\d\b\9\z\j\l\d\h\2\e\9\s\o\2\1\2\n\v\y\5\b\9\x\d\z\z\n\w\r\f\i\l\a\p\4\l\1\p\z\r\3\n\a\g\x\g\g\c\h\d\u\v\x\9\q\5\h\c\9\n\1\l\7\f\c\3\3\j\w\j\1\l\8\n\p\9\7\0\j\x\a\m\2\w\y\1\e\4\y\b\h\t\z\9\g\4\d\4\i\q\x\v\9\2\m\0\9\f\x\i\l\o\6\7\2\p\8\n\n\h\w\c\w\j\d\r\r\3\k\g\s\1\z\0\o\x\k\o\7\n\n\h\f\e\i\i\4\3\s\h\2\9\7\g\0\o\7\x\g\q\c\v\o\r\a\j\x\t\t\i\h\9\k\y\g\8\e\b\m\0\k\8\5\r\k\u\9\3\i\h\u\u\j\n\x\z\q\b\g\d\0\s\l\7\p\i\2\a\j\j\l\e\6\c\i\7\h\r\q\e\o\j\7\u\h\v\l\y\n\h\p\e\3\b\h\c\m\x\v\q\v\u\z\6\f\8\x\3\8\u\c\9\5\h\o\l\v\r\l\0\5\o\r\d\e\z\h\t\b\w\y\s\f\j\b\t\r\9\n\5\u\f\f\6\g\z\7\3\n\i\l\j\7\p\8\4\z\5\o\y\w\u\u\p\b\m\a\c\e\g\l\m\b\2\k\b\5\7\2\0\w\k\2\b\5\m\i\i\o\9\c\v\h\m\x\d\3\8\1\p\x\t\m\o\k\w\g\u\v\6\n\h\b\5\f\0\3\3\2\h\q\o\2\6\t\0\m\l\4\i\8\b\0\w\b\r\t\e\6\2\s\9\d\z\b\x\4\o\m\6\y\u\2\r\3\q\4\a\f\k\o\i\c\w\i\8\z\l\j\j\9\q\b\u\u\j\u\r\3\p\j\x\e\e\r\v\f\f\m\2\d\o\c\1\l\p\l\a\e\l\c\8\o\1\0\s\y\k\y\l\u\3\m\t\2\v\7\6\m\3\k\q\3\i\f\n\o\m\v\2\f\3\0\3\r\a\2\7\n\w\w\x\c\i\f\d\o\g\a\6\b\i\8\w\n\x\e\9\b\h\6\z\f\n\q\f\s\l\q\8\0\9\6\6\n\x\d\s\2\i\6\t\3\v\n\x\1\b\4\w\c\x\d\c\q\2\6\n\8\f\9\q\0\o\y\l\o\x\1\3\7\w\x\h\0\i\4\i\r\l\0\x\1\p\f\n\r\3\j\q\b\j\9\z\p\u\i\b\v\r\m\a\k\5\8\a\z\h\u\2\o\t\9\x\6\p\8\h\a\w\2\k\3\v\3\l\5\x\5\b\u\b\3\j\c\b\e\8\e\x\v\v\g\p\s\6\1\l\2\j\a\4\s\0\k\x\f\e\j\q\a\d\u\0\o\s\s\s\q\o\g\v\8\x\5\0\3\e\e\t\w\z\m\b\d\t\t\a\4\h\5\w\e\6\h\o\o\3\4\s\k\w\p\u\w\1\v\m\l\a\8\h\o\7\4\2\4\l\5\e\r\2\2\i\y\i\7\j\3\6\4\c\u\z\1\v\4\i\3\a\m\r\t\b\j\f\x\0\j\d\v\s\h\9\p\1\d\j\t\f\h\h\2\a\4\k\4\f\g\f\s\0\0\2\o\2\u\l\7\v\s\8\d\x\4\o\6\c\h\t\k\v\k\h\v\g\u\k\l\1\2\g\i\u\e\g\p\e\t\m\c\i\6\o\d\q\y\g\w\d\3\0\r\g\1\t\z\h\d\f\c\n\4\2\g\1\d\h\6\2\f\d\5\y\o\z\3\p\d\a\b\z\f\e\m\6\c\8\t\a\1\p\0\y\v\6\l\7\2\f\p\v\w\l\0\d\a\r\3\z\e\i\o\g\z\o\y\x\d\7\s\k\5\r\z\a\b\5\a\d\q\i\l\o\q\d\x\f\c\4\u\m\r\3\2\b\5\5\3\4\5\u\8\n\0\h\k\m\a\n\p\6\h\3\l\f\n\k\d\h\q\l\7\5\v\s\x\d\l\q\u\v\l\s\q\9\h\l\8\7\a\e\f\g\r\t\2\8\m\8\2\y\q\2\x\y\v\k\b\8\n\e\i\u\8\0\5\j\j\5\q\s\5\8\u\a\i\m\q\y\a\y\9\j\f\4\c\s\a\q\z\3\t\r\m\o\b\f\l\c\u\0\b\b\z\w\w\o\k\5\d\n\6\p\9\h\4\m\u\t\o\y\d\x\m\g\8\t\t\m\4\p\8\b\2\k\2\y\n\b\e\y\t\p\y\b\b\1\e\a\1\r\l\b\a\m\3\c\s\w\l\0\b\d\l\w\s\k\w\b\w\3\3\s\n\9\c\p\h\z\b\5\z\q\s\3\6\t\e\y\h\2\3\m\2\0\8\i\f\i\z\q\y\n\s\s\4\r\d\p\h\s\v\t\8\z\6\1\9\3\c\4\q\t\0\9\v\5\6\1\1\z\v\v\r\0\z\c\w\u\l\r\h\t\n\k\l\8\u\l\t\8\b\1\o\i\x\e\x\b\2\r\b\0\y\q\5\f\e\1\t\h\6\x\c\7\k\5\h\6\y\r\h\x\v\5\y\f\a\q\i\3\z\4\7\w\e\5\8\n\l\g\7\9\4\w\6\t\g\o\y\n\b\b\1\x\k\t\3\o\c\b\8\3\b\w\j\d\8\5\l\e\o\r\k\x\9\m\q\o\n\w\f\u\y\6\x\v\i\x\7\l\p\7\a\q\a\u\q\o\x\w\n\z\n\6\m\y\j\m\6\h\i\c\p\m\8\b\p\e\w\n\i\g\m\m\z\1\z\a\a\q\0\d\f\n\j\b\b\1\o\p\e\7\p\l\d\f\t\w\9\l\d\4\q\3\w\5\s\k\e\1\e\x\w\a\v\m\u\e\i\0\y\t\a\s\l\9\m\7\6\d\h\h\s\j\o\z\k\b\y\y\v\9\3\k\h\e\u\8\j\q\0\f\k\e\t\5\k\q\d\q\g\q\x\x\i\7\w\1
\0\i\v\0\2\b\r\d\c\m\v\q\l\y\y\i\r\2\5\d\r\7\4\h\e\i\u\a\2\m\g\b\e\z\o\8\w\t\i\y\b\3\r\n\7\f\r\m\1\x\y\3\q\1\c\7\w\9\y\d\9\d\8\2\q\q\k\0\6\s\m\8\w\z\9\6\j\l\l\t\n\e\c\z\s\f\4\1\u\6\e\z\6\w\j\8\5\7\0\k\g\r\o\e\w\t\z\9\z\p\4\d\u\e\0\9\m\s\x\a\m\7\k\o\a\o\5\t\i\j\5\u\8\8\t\2\p\4\g\i\y\f\4\6\k\2\k\h\3\2\8\7\6\j\z\x\x\m\4\6\l\5\9\c\k\f\h\l\z\b\3\v\n\c\4\2\z\s\g\2\k\9\g\f\t\h\l\7\e\u\f\e\0\3\u\x\n\1\e\b\m\x\m\9\u\b\b\r\b\a\u\z\3\p\5\0\6\v\h\w\m\i\o\n\w\0\4\b\k\p\s\e\6\n\a\l\x\x\4\o\p\1\e\6\n\s\y\2\d\m\z\u\8\l\9\b\b\a\k\c\y\2\k\7\s\n\j\r\x\y\4\o\y\8\s\0\8\3\k\6\i\r\9\z\b\p\j\x\e\a\x\b\f\z\3\3\p\1\t\2\h\l\3\b\k\9\p\1\y\j\k\o\w\h\t\q\n\4\y\7\7\i\k\t\v\q\e\w\5\m\q\4\r\z\o\a\0\o\w\p\w\r\k\c\l\3\6\5\v\e\x\i\b\x\d\t\a\f\v\f\5\e\g\o\w\g\y\q\w\2\7\y\t\9\c\c\5\9\w\v\e\0\t\q\l\p\4\m\e\l\o\l\8\x\6\c\1\2\u\7\4\e\b\k\y\4\e\k\g\5\v\w\r\v\n\d\t\v\8\p\c\h\j\v\3\l\r\c\h\g\p\3\l\b\e\i\d\q\d\n\j\a\a\f\9\z\p\d\v\j\v\r\f\p\y\3\y\j\d\f\m\f\y\u\p\r\u\4\3\3\v\v\w\u\u\z\5\l\4\w\i\b\b\3\j\w\6\s\n\t\k\p\w\t\a\3\e\7\9\f\k\u\q\g\w\e\s\7\0\n\2\x\l\q\n\u\e\r\x\4\c\k\i\a\5\y\j\7\2\a\z\h\2\t\v\q\2\c\o\m\c\a\m\2\3\6\q\8\e\b\4\1\g\l\7\z\k\m\r\o\c\7\6\k\o\5\e\a\t\f\7\m\w\j\s\q\4\x\s\l\c\f\p\k\7\g\1\q\l\b\r\h\i\v\p\x\u\y\h\2\h\s\x\g\f\v\d\b\y\p\d\d\v\d\g\o\y\j\j\n\v\o\c\o\k\1\c\t\2\0\u\z\u\m\0\w\r\j\u\r\u\k\o\a\g\g\i\x\9\b\g\j\j\v\v\4\5\1\4\o\c\s\n\6\j\z\n\3\v\w\s\5\o\6\w\4\x\h\q\s\u\u\o\z\u\n\1\4\6\x\d\1\5\l\v\4\i\c\6\t\j\6\6\4\l\8\q\y\m\0\u\6\l\d\7\1\s\u\g\2\a\u\g\0\z\m\t\a\b\o\v\f\f\9\v\z\r\u\f\l\x\o\e\7\0\k\w\5\4\8\6\x\a\l\x\o\4\y\j\2\p\r\4\6\z\y\1\k\c\m\c\c\y\f\4\7\j\f\s\o\o\l\g\9\6\p\q\0\u\k\f\k\0\0\2\t\1\m\5\g\4\3\q\x\4\m\t\z\r\3\2\s\x\r\q\j\f\l\a\v\r\w\6\d\j\w\p\b\1\d\o\k\x\k\s\9\z\7\l\2\6\p\m\8\0\z\t\q\u\u\4\c\2\q\e\k\e\g\6\1\0\i\h\j\h\u\5\4\k\v\7\h\q\6\w\z\0\7\y\8\y\d\8\z\c\3\z\j\0\2\a\r\u\p\7\7\z\p\u\h\h\c\j\u\j\c\u\w\g\c\e\r\a\x\r\d\b\1\9\u\p\a\x\8\u\8\n\1\7\m\7\i\k\y\q\o\h\f\e\h\d\9\n\x\c\5\3\n\s\u\6\c\j\j\c\3\p\n\j\a\j\x\s\y\p\c\2\2\f\t\c\x\h\2\k\f\o\l\5\i\a\u\7\5\t\p\z\w\5\6\a\9\m\n\i\9\9\p\f\u\i\f\n\p\f\z\h\y\b\w\r\m\s\4\e\5\a\v\e\d\1\c\q\j\i\o\t\2\x\4\n\i\k\z\x\h\2\w\5\8\0\9\9\t\5\v\b\3\9\0\6\u\g\k\9\f\w\0\b\v\i\a\l\j\x\3\u\8\h\0\u\1\a\0\w\d\s\o\c\a\f\h\x\l\a\p\y\v\r\3\t\m\j\9\6\c\s\5\u\7\0\h\t\s\a\3\l\g\k\j\u\i\f\y\l\e\9\f\c\f\3\j\j\a\g\4\y\2\t\u\d\m\o\4\i\t\h\3\5\h\d\w\d\v\j\o\a\9\t\l\d\j\a\2\o\y\e\w\6\1\g\h\i\l\p\k\i\c\c\o\j\j\d\b\a\c\5\o\7\3\v\s\y\r\u\x\8\o\b\w\p\o\h\3\k\f\4\0\f\2\n\c\1\t\9\l\r\c\s\k\j\f\5\r\6\y\b\a\d\p\q\d\d\d\u\p\y\9\o\i\1\1\d\a\5\p\3\m\r\t\1\1\c\d\t\v\e\h\7\t\b\v\q\g\h\7\j\h\1\s\f\a\q\r\g\s\j\c\d\a\0\y\k\j\1\d\4\j\6\f\p\t\o\7\7\8\z\r\k\g\o\l\i\r\w\e\z\s\t\e\0\n\w\h\m\4\b\2\5\c\n\g\h\l\7\v\o\w\n\w\1\9\n\h\5\0\y\a\u\x\j\x\2\8\l\9\7\j\w\5\r\6\p\y\l\q\0\w\3\n\k\2\q\0\u\1\g\g\4\c\q\j\w\1\8\q\u\m\8\i\0\w\j\o\x\4\7\l\v\z\a\a\m\f\v\8\z\q\5\j\v\i\r\y\k\r\5\l ]] 00:06:56.147 ************************************ 00:06:56.147 END TEST dd_rw_offset 00:06:56.147 ************************************ 00:06:56.147 00:06:56.147 real 0m1.084s 00:06:56.147 user 0m0.767s 00:06:56.147 sys 0m0.193s 00:06:56.147 00:19:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.147 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 00:19:11 -- dd/basic_rw.sh@1 -- # cleanup 00:06:56.147 00:19:11 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:56.147 00:19:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:56.147 00:19:11 -- dd/common.sh@11 -- # local nvme_ref= 00:06:56.147 00:19:11 -- dd/common.sh@12 -- # local size=0xffff 00:06:56.147 00:19:11 -- dd/common.sh@14 -- # local bs=1048576 
00:06:56.147 00:19:11 -- dd/common.sh@15 -- # local count=1 00:06:56.147 00:19:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:56.147 00:19:11 -- dd/common.sh@18 -- # gen_conf 00:06:56.147 00:19:11 -- dd/common.sh@31 -- # xtrace_disable 00:06:56.147 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:56.147 [2024-09-29 00:19:11.975940] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:56.147 [2024-09-29 00:19:11.976038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:06:56.147 { 00:06:56.147 "subsystems": [ 00:06:56.147 { 00:06:56.147 "subsystem": "bdev", 00:06:56.147 "config": [ 00:06:56.147 { 00:06:56.147 "params": { 00:06:56.147 "trtype": "pcie", 00:06:56.147 "traddr": "0000:00:06.0", 00:06:56.147 "name": "Nvme0" 00:06:56.147 }, 00:06:56.147 "method": "bdev_nvme_attach_controller" 00:06:56.147 }, 00:06:56.147 { 00:06:56.147 "method": "bdev_wait_for_examine" 00:06:56.147 } 00:06:56.147 ] 00:06:56.147 } 00:06:56.147 ] 00:06:56.147 } 00:06:56.407 [2024-09-29 00:19:12.113296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.407 [2024-09-29 00:19:12.162119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.666  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:56.666 00:06:56.666 00:19:12 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.666 00:06:56.666 real 0m14.833s 00:06:56.666 user 0m10.743s 00:06:56.666 sys 0m2.711s 00:06:56.666 ************************************ 00:06:56.666 END TEST spdk_dd_basic_rw 00:06:56.666 ************************************ 00:06:56.666 00:19:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.666 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.666 00:19:12 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:56.666 00:19:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.666 00:19:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.666 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.666 ************************************ 00:06:56.666 START TEST spdk_dd_posix 00:06:56.666 ************************************ 00:06:56.666 00:19:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:56.926 * Looking for test storage... 
00:06:56.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:56.927 00:19:12 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.927 00:19:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.927 00:19:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.927 00:19:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.927 00:19:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.927 00:19:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.927 00:19:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.927 00:19:12 -- paths/export.sh@5 -- # export PATH 00:06:56.927 00:19:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.927 00:19:12 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:56.927 00:19:12 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:56.927 00:19:12 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:56.927 00:19:12 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:56.927 00:19:12 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.927 00:19:12 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.927 00:19:12 -- dd/posix.sh@130 -- # tests 00:06:56.927 00:19:12 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:56.927 * First test run, liburing in use 00:06:56.927 00:19:12 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:06:56.927 00:19:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:56.927 00:19:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.927 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.927 ************************************ 00:06:56.927 START TEST dd_flag_append 00:06:56.927 ************************************ 00:06:56.927 00:19:12 -- common/autotest_common.sh@1104 -- # append 00:06:56.927 00:19:12 -- dd/posix.sh@16 -- # local dump0 00:06:56.927 00:19:12 -- dd/posix.sh@17 -- # local dump1 00:06:56.927 00:19:12 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:56.927 00:19:12 -- dd/common.sh@98 -- # xtrace_disable 00:06:56.927 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.927 00:19:12 -- dd/posix.sh@19 -- # dump0=ntbrst0aqvdi31fyjn3nzmlquuhjgoxf 00:06:56.927 00:19:12 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:56.927 00:19:12 -- dd/common.sh@98 -- # xtrace_disable 00:06:56.927 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.927 00:19:12 -- dd/posix.sh@20 -- # dump1=n24ztuy7d0bvidmx4rrcoa8dlhe6jxop 00:06:56.927 00:19:12 -- dd/posix.sh@22 -- # printf %s ntbrst0aqvdi31fyjn3nzmlquuhjgoxf 00:06:56.927 00:19:12 -- dd/posix.sh@23 -- # printf %s n24ztuy7d0bvidmx4rrcoa8dlhe6jxop 00:06:56.927 00:19:12 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:56.927 [2024-09-29 00:19:12.658359] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:56.927 [2024-09-29 00:19:12.658473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57988 ] 00:06:57.186 [2024-09-29 00:19:12.786354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.186 [2024-09-29 00:19:12.833278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.186  Copying: 32/32 [B] (average 31 kBps) 00:06:57.186 00:06:57.446 00:19:13 -- dd/posix.sh@27 -- # [[ n24ztuy7d0bvidmx4rrcoa8dlhe6jxopntbrst0aqvdi31fyjn3nzmlquuhjgoxf == \n\2\4\z\t\u\y\7\d\0\b\v\i\d\m\x\4\r\r\c\o\a\8\d\l\h\e\6\j\x\o\p\n\t\b\r\s\t\0\a\q\v\d\i\3\1\f\y\j\n\3\n\z\m\l\q\u\u\h\j\g\o\x\f ]] 00:06:57.446 00:06:57.446 real 0m0.439s 00:06:57.446 user 0m0.233s 00:06:57.446 sys 0m0.089s 00:06:57.446 00:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.446 00:19:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.446 ************************************ 00:06:57.446 END TEST dd_flag_append 00:06:57.446 ************************************ 00:06:57.446 00:19:13 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:57.446 00:19:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:57.446 00:19:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.446 00:19:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.446 ************************************ 00:06:57.446 START TEST dd_flag_directory 00:06:57.446 ************************************ 00:06:57.446 00:19:13 -- common/autotest_common.sh@1104 -- # directory 00:06:57.446 00:19:13 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.446 00:19:13 -- 
common/autotest_common.sh@640 -- # local es=0 00:06:57.446 00:19:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.446 00:19:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.446 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.446 00:19:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.446 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.446 00:19:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.446 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.446 00:19:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.446 00:19:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.446 00:19:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.446 [2024-09-29 00:19:13.149005] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:57.446 [2024-09-29 00:19:13.149112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58009 ] 00:06:57.446 [2024-09-29 00:19:13.285508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.705 [2024-09-29 00:19:13.335473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.705 [2024-09-29 00:19:13.378253] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.705 [2024-09-29 00:19:13.378321] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:57.705 [2024-09-29 00:19:13.378358] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.705 [2024-09-29 00:19:13.440973] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:57.705 00:19:13 -- common/autotest_common.sh@643 -- # es=236 00:06:57.705 00:19:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.705 00:19:13 -- common/autotest_common.sh@652 -- # es=108 00:06:57.706 00:19:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:57.706 00:19:13 -- common/autotest_common.sh@660 -- # es=1 00:06:57.706 00:19:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.706 00:19:13 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.706 00:19:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.706 00:19:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.706 00:19:13 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.706 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.706 00:19:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.706 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.706 00:19:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.706 00:19:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.706 00:19:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.706 00:19:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.706 00:19:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:57.965 [2024-09-29 00:19:13.598570] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:06:57.965 [2024-09-29 00:19:13.598671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58024 ] 00:06:57.965 [2024-09-29 00:19:13.736353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.965 [2024-09-29 00:19:13.786568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.225 [2024-09-29 00:19:13.830390] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:58.225 [2024-09-29 00:19:13.830456] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:58.225 [2024-09-29 00:19:13.830485] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.225 [2024-09-29 00:19:13.887845] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:58.225 00:19:13 -- common/autotest_common.sh@643 -- # es=236 00:06:58.225 00:19:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.225 00:19:13 -- common/autotest_common.sh@652 -- # es=108 00:06:58.225 00:19:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.225 00:19:13 -- common/autotest_common.sh@660 -- # es=1 00:06:58.225 00:19:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.225 00:06:58.225 real 0m0.884s 00:06:58.225 user 0m0.503s 00:06:58.225 sys 0m0.168s 00:06:58.225 00:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.225 00:19:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.225 ************************************ 00:06:58.225 END TEST dd_flag_directory 00:06:58.225 ************************************ 00:06:58.225 00:19:14 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:58.225 00:19:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.225 00:19:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.225 00:19:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.225 ************************************ 00:06:58.225 START TEST dd_flag_nofollow 00:06:58.225 ************************************ 00:06:58.225 00:19:14 -- common/autotest_common.sh@1104 -- # nofollow 00:06:58.225 00:19:14 -- dd/posix.sh@36 -- # local 
test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.225 00:19:14 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.225 00:19:14 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.225 00:19:14 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.225 00:19:14 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.225 00:19:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.225 00:19:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.225 00:19:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.225 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.225 00:19:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.225 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.225 00:19:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.225 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.225 00:19:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.225 00:19:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.225 00:19:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.495 [2024-09-29 00:19:14.086118] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:58.495 [2024-09-29 00:19:14.086218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58047 ] 00:06:58.495 [2024-09-29 00:19:14.224370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.495 [2024-09-29 00:19:14.270938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.495 [2024-09-29 00:19:14.312795] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.495 [2024-09-29 00:19:14.312861] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:58.495 [2024-09-29 00:19:14.312888] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.795 [2024-09-29 00:19:14.372667] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:58.795 00:19:14 -- common/autotest_common.sh@643 -- # es=216 00:06:58.795 00:19:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.795 00:19:14 -- common/autotest_common.sh@652 -- # es=88 00:06:58.795 00:19:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.795 00:19:14 -- common/autotest_common.sh@660 -- # es=1 00:06:58.795 00:19:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.795 00:19:14 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.795 00:19:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.795 00:19:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.795 00:19:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.795 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.795 00:19:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.795 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.795 00:19:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.795 00:19:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.795 00:19:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:58.795 00:19:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:58.795 00:19:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:58.795 [2024-09-29 00:19:14.537462] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:58.795 [2024-09-29 00:19:14.537580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58057 ] 00:06:59.055 [2024-09-29 00:19:14.672630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.055 [2024-09-29 00:19:14.718992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.055 [2024-09-29 00:19:14.760908] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.055 [2024-09-29 00:19:14.760975] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:59.055 [2024-09-29 00:19:14.761004] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.055 [2024-09-29 00:19:14.824433] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:59.315 00:19:14 -- common/autotest_common.sh@643 -- # es=216 00:06:59.315 00:19:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.315 00:19:14 -- common/autotest_common.sh@652 -- # es=88 00:06:59.315 00:19:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:59.315 00:19:14 -- common/autotest_common.sh@660 -- # es=1 00:06:59.315 00:19:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.315 00:19:14 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:59.315 00:19:14 -- dd/common.sh@98 -- # xtrace_disable 00:06:59.315 00:19:14 -- common/autotest_common.sh@10 -- # set +x 00:06:59.315 00:19:14 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.315 [2024-09-29 00:19:15.001310] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:06:59.315 [2024-09-29 00:19:15.001442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ] 00:06:59.315 [2024-09-29 00:19:15.136615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.575 [2024-09-29 00:19:15.198997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.834  Copying: 512/512 [B] (average 500 kBps) 00:06:59.834 00:06:59.834 00:19:15 -- dd/posix.sh@49 -- # [[ nfheo8te36o78sq57bwpadafg4n2trpt6nrpnjmspwr6ut4e93t3cjsya07c4bqvj5x6925l5m3jjtz7oz9jky4adpla57bscusccagc7g4hnhs5jqy4z0xzanryt4n23ska1am7io60pq6f9186wosuvakmr668ng3ic0bhcaam7y2rrym01pkpgnjm6fm4jqy67e316xffbmv4ksro3o9mtf4ha05z5yukhc53d4q3yf5e2kujv3b3mnpk2s2mkq7sevhdq0ke491ixtwtds3bl791iyy9qygyj8pajf4vp39jx17qski27u4kf4zvqfy4shc5ycohgaxf1lr3oq3tpku5eq6x8pvmda46eqm7v7e01a1tdoc2kdzymwmbp0uogr9wc7821t33q87lyn5y09g2c489by5ztorl5xuqgs9r0xh4c8nn4rb7ckd998hhdkzog74x7qa99z67nuko2lw35et159ydst68el6sah33cz3st9xs13efjwe1 == \n\f\h\e\o\8\t\e\3\6\o\7\8\s\q\5\7\b\w\p\a\d\a\f\g\4\n\2\t\r\p\t\6\n\r\p\n\j\m\s\p\w\r\6\u\t\4\e\9\3\t\3\c\j\s\y\a\0\7\c\4\b\q\v\j\5\x\6\9\2\5\l\5\m\3\j\j\t\z\7\o\z\9\j\k\y\4\a\d\p\l\a\5\7\b\s\c\u\s\c\c\a\g\c\7\g\4\h\n\h\s\5\j\q\y\4\z\0\x\z\a\n\r\y\t\4\n\2\3\s\k\a\1\a\m\7\i\o\6\0\p\q\6\f\9\1\8\6\w\o\s\u\v\a\k\m\r\6\6\8\n\g\3\i\c\0\b\h\c\a\a\m\7\y\2\r\r\y\m\0\1\p\k\p\g\n\j\m\6\f\m\4\j\q\y\6\7\e\3\1\6\x\f\f\b\m\v\4\k\s\r\o\3\o\9\m\t\f\4\h\a\0\5\z\5\y\u\k\h\c\5\3\d\4\q\3\y\f\5\e\2\k\u\j\v\3\b\3\m\n\p\k\2\s\2\m\k\q\7\s\e\v\h\d\q\0\k\e\4\9\1\i\x\t\w\t\d\s\3\b\l\7\9\1\i\y\y\9\q\y\g\y\j\8\p\a\j\f\4\v\p\3\9\j\x\1\7\q\s\k\i\2\7\u\4\k\f\4\z\v\q\f\y\4\s\h\c\5\y\c\o\h\g\a\x\f\1\l\r\3\o\q\3\t\p\k\u\5\e\q\6\x\8\p\v\m\d\a\4\6\e\q\m\7\v\7\e\0\1\a\1\t\d\o\c\2\k\d\z\y\m\w\m\b\p\0\u\o\g\r\9\w\c\7\8\2\1\t\3\3\q\8\7\l\y\n\5\y\0\9\g\2\c\4\8\9\b\y\5\z\t\o\r\l\5\x\u\q\g\s\9\r\0\x\h\4\c\8\n\n\4\r\b\7\c\k\d\9\9\8\h\h\d\k\z\o\g\7\4\x\7\q\a\9\9\z\6\7\n\u\k\o\2\l\w\3\5\e\t\1\5\9\y\d\s\t\6\8\e\l\6\s\a\h\3\3\c\z\3\s\t\9\x\s\1\3\e\f\j\w\e\1 ]] 00:06:59.834 00:06:59.834 real 0m1.423s 00:06:59.834 user 0m0.817s 00:06:59.834 sys 0m0.269s 00:06:59.834 00:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.834 00:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.834 ************************************ 00:06:59.834 END TEST dd_flag_nofollow 00:06:59.834 ************************************ 00:06:59.834 00:19:15 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:59.834 00:19:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:59.834 00:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.834 00:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.834 ************************************ 00:06:59.834 START TEST dd_flag_noatime 00:06:59.834 ************************************ 00:06:59.834 00:19:15 -- common/autotest_common.sh@1104 -- # noatime 00:06:59.834 00:19:15 -- dd/posix.sh@53 -- # local atime_if 00:06:59.834 00:19:15 -- dd/posix.sh@54 -- # local atime_of 00:06:59.834 00:19:15 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:59.834 00:19:15 -- dd/common.sh@98 -- # xtrace_disable 00:06:59.834 00:19:15 -- common/autotest_common.sh@10 -- # set +x 00:06:59.834 00:19:15 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.834 00:19:15 -- dd/posix.sh@60 -- # atime_if=1727569155 00:06:59.834 00:19:15 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.834 00:19:15 -- dd/posix.sh@61 -- # atime_of=1727569155 00:06:59.834 00:19:15 -- dd/posix.sh@66 -- # sleep 1 00:07:00.770 00:19:16 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:00.770 [2024-09-29 00:19:16.580048] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:00.770 [2024-09-29 00:19:16.580150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58105 ] 00:07:01.030 [2024-09-29 00:19:16.720578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.030 [2024-09-29 00:19:16.790550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.289  Copying: 512/512 [B] (average 500 kBps) 00:07:01.289 00:07:01.289 00:19:17 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.289 00:19:17 -- dd/posix.sh@69 -- # (( atime_if == 1727569155 )) 00:07:01.289 00:19:17 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.289 00:19:17 -- dd/posix.sh@70 -- # (( atime_of == 1727569155 )) 00:07:01.289 00:19:17 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.289 [2024-09-29 00:19:17.068109] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:01.289 [2024-09-29 00:19:17.068236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58116 ] 00:07:01.548 [2024-09-29 00:19:17.204386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.549 [2024-09-29 00:19:17.250880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.808  Copying: 512/512 [B] (average 500 kBps) 00:07:01.808 00:07:01.808 00:19:17 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.808 00:19:17 -- dd/posix.sh@73 -- # (( atime_if < 1727569157 )) 00:07:01.808 00:07:01.808 real 0m1.966s 00:07:01.808 user 0m0.522s 00:07:01.808 sys 0m0.202s 00:07:01.808 00:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.808 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.808 ************************************ 00:07:01.808 END TEST dd_flag_noatime 00:07:01.808 ************************************ 00:07:01.808 00:19:17 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:01.808 00:19:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.808 00:19:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.808 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.808 ************************************ 00:07:01.808 START TEST dd_flags_misc 00:07:01.808 ************************************ 00:07:01.808 00:19:17 -- common/autotest_common.sh@1104 -- # io 00:07:01.808 00:19:17 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:01.808 00:19:17 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:07:01.808 00:19:17 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:01.808 00:19:17 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:01.808 00:19:17 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:01.808 00:19:17 -- dd/common.sh@98 -- # xtrace_disable 00:07:01.808 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.808 00:19:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.808 00:19:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:01.808 [2024-09-29 00:19:17.582611] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:01.808 [2024-09-29 00:19:17.582731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58143 ] 00:07:02.067 [2024-09-29 00:19:17.718238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.067 [2024-09-29 00:19:17.798819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.326  Copying: 512/512 [B] (average 500 kBps) 00:07:02.326 00:07:02.326 00:19:18 -- dd/posix.sh@93 -- # [[ deq2xfws1q8iqz9yfvg391k25wz33s9kosf75vfx7c6itfm1e830a8wcfovags2461czyl25pbgk30yb63linj5t74pbky3gb20l7x4d9l21lf2c9s7mqujbsyju0r46a7kbn3i8i3uhufyjtghbicxufxcibl4j31b174y4tjfpkgc9osvddzvzaqp2ncr7uwwice5fgcujk7psxwm067fml14qeqyyrmqpv1u1t7k3kzaq94ptnd0qsc7p2eltzph4y892oinp0vz7n7xac3xsdu2ck5pe3kac9j3azgns6aawgfon31go3f89z0oum7pzptv65k9am18cct18r09x8zu3y3qpo3i9q52xkvv508669ksuf3ind17gjncolkreh2vnfoaursbtuj8b849jqsy0skuojmijwgs0yjne5swhjd0b2vo2j0z6qszy4izao5b5xpyzuyh2bj33zwelet8edyut4a6642hptev0cedesivxfy4d0wpmha8p == \d\e\q\2\x\f\w\s\1\q\8\i\q\z\9\y\f\v\g\3\9\1\k\2\5\w\z\3\3\s\9\k\o\s\f\7\5\v\f\x\7\c\6\i\t\f\m\1\e\8\3\0\a\8\w\c\f\o\v\a\g\s\2\4\6\1\c\z\y\l\2\5\p\b\g\k\3\0\y\b\6\3\l\i\n\j\5\t\7\4\p\b\k\y\3\g\b\2\0\l\7\x\4\d\9\l\2\1\l\f\2\c\9\s\7\m\q\u\j\b\s\y\j\u\0\r\4\6\a\7\k\b\n\3\i\8\i\3\u\h\u\f\y\j\t\g\h\b\i\c\x\u\f\x\c\i\b\l\4\j\3\1\b\1\7\4\y\4\t\j\f\p\k\g\c\9\o\s\v\d\d\z\v\z\a\q\p\2\n\c\r\7\u\w\w\i\c\e\5\f\g\c\u\j\k\7\p\s\x\w\m\0\6\7\f\m\l\1\4\q\e\q\y\y\r\m\q\p\v\1\u\1\t\7\k\3\k\z\a\q\9\4\p\t\n\d\0\q\s\c\7\p\2\e\l\t\z\p\h\4\y\8\9\2\o\i\n\p\0\v\z\7\n\7\x\a\c\3\x\s\d\u\2\c\k\5\p\e\3\k\a\c\9\j\3\a\z\g\n\s\6\a\a\w\g\f\o\n\3\1\g\o\3\f\8\9\z\0\o\u\m\7\p\z\p\t\v\6\5\k\9\a\m\1\8\c\c\t\1\8\r\0\9\x\8\z\u\3\y\3\q\p\o\3\i\9\q\5\2\x\k\v\v\5\0\8\6\6\9\k\s\u\f\3\i\n\d\1\7\g\j\n\c\o\l\k\r\e\h\2\v\n\f\o\a\u\r\s\b\t\u\j\8\b\8\4\9\j\q\s\y\0\s\k\u\o\j\m\i\j\w\g\s\0\y\j\n\e\5\s\w\h\j\d\0\b\2\v\o\2\j\0\z\6\q\s\z\y\4\i\z\a\o\5\b\5\x\p\y\z\u\y\h\2\b\j\3\3\z\w\e\l\e\t\8\e\d\y\u\t\4\a\6\6\4\2\h\p\t\e\v\0\c\e\d\e\s\i\v\x\f\y\4\d\0\w\p\m\h\a\8\p ]] 00:07:02.326 00:19:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.326 00:19:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:02.326 [2024-09-29 00:19:18.115537] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:02.326 [2024-09-29 00:19:18.115663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58150 ] 00:07:02.585 [2024-09-29 00:19:18.252553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.585 [2024-09-29 00:19:18.311900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.844  Copying: 512/512 [B] (average 500 kBps) 00:07:02.844 00:07:02.844 00:19:18 -- dd/posix.sh@93 -- # [[ deq2xfws1q8iqz9yfvg391k25wz33s9kosf75vfx7c6itfm1e830a8wcfovags2461czyl25pbgk30yb63linj5t74pbky3gb20l7x4d9l21lf2c9s7mqujbsyju0r46a7kbn3i8i3uhufyjtghbicxufxcibl4j31b174y4tjfpkgc9osvddzvzaqp2ncr7uwwice5fgcujk7psxwm067fml14qeqyyrmqpv1u1t7k3kzaq94ptnd0qsc7p2eltzph4y892oinp0vz7n7xac3xsdu2ck5pe3kac9j3azgns6aawgfon31go3f89z0oum7pzptv65k9am18cct18r09x8zu3y3qpo3i9q52xkvv508669ksuf3ind17gjncolkreh2vnfoaursbtuj8b849jqsy0skuojmijwgs0yjne5swhjd0b2vo2j0z6qszy4izao5b5xpyzuyh2bj33zwelet8edyut4a6642hptev0cedesivxfy4d0wpmha8p == \d\e\q\2\x\f\w\s\1\q\8\i\q\z\9\y\f\v\g\3\9\1\k\2\5\w\z\3\3\s\9\k\o\s\f\7\5\v\f\x\7\c\6\i\t\f\m\1\e\8\3\0\a\8\w\c\f\o\v\a\g\s\2\4\6\1\c\z\y\l\2\5\p\b\g\k\3\0\y\b\6\3\l\i\n\j\5\t\7\4\p\b\k\y\3\g\b\2\0\l\7\x\4\d\9\l\2\1\l\f\2\c\9\s\7\m\q\u\j\b\s\y\j\u\0\r\4\6\a\7\k\b\n\3\i\8\i\3\u\h\u\f\y\j\t\g\h\b\i\c\x\u\f\x\c\i\b\l\4\j\3\1\b\1\7\4\y\4\t\j\f\p\k\g\c\9\o\s\v\d\d\z\v\z\a\q\p\2\n\c\r\7\u\w\w\i\c\e\5\f\g\c\u\j\k\7\p\s\x\w\m\0\6\7\f\m\l\1\4\q\e\q\y\y\r\m\q\p\v\1\u\1\t\7\k\3\k\z\a\q\9\4\p\t\n\d\0\q\s\c\7\p\2\e\l\t\z\p\h\4\y\8\9\2\o\i\n\p\0\v\z\7\n\7\x\a\c\3\x\s\d\u\2\c\k\5\p\e\3\k\a\c\9\j\3\a\z\g\n\s\6\a\a\w\g\f\o\n\3\1\g\o\3\f\8\9\z\0\o\u\m\7\p\z\p\t\v\6\5\k\9\a\m\1\8\c\c\t\1\8\r\0\9\x\8\z\u\3\y\3\q\p\o\3\i\9\q\5\2\x\k\v\v\5\0\8\6\6\9\k\s\u\f\3\i\n\d\1\7\g\j\n\c\o\l\k\r\e\h\2\v\n\f\o\a\u\r\s\b\t\u\j\8\b\8\4\9\j\q\s\y\0\s\k\u\o\j\m\i\j\w\g\s\0\y\j\n\e\5\s\w\h\j\d\0\b\2\v\o\2\j\0\z\6\q\s\z\y\4\i\z\a\o\5\b\5\x\p\y\z\u\y\h\2\b\j\3\3\z\w\e\l\e\t\8\e\d\y\u\t\4\a\6\6\4\2\h\p\t\e\v\0\c\e\d\e\s\i\v\x\f\y\4\d\0\w\p\m\h\a\8\p ]] 00:07:02.844 00:19:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.844 00:19:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:02.844 [2024-09-29 00:19:18.592440] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:02.844 [2024-09-29 00:19:18.592533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58158 ] 00:07:03.103 [2024-09-29 00:19:18.729682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.103 [2024-09-29 00:19:18.782939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.361  Copying: 512/512 [B] (average 500 kBps) 00:07:03.361 00:07:03.362 00:19:18 -- dd/posix.sh@93 -- # [[ deq2xfws1q8iqz9yfvg391k25wz33s9kosf75vfx7c6itfm1e830a8wcfovags2461czyl25pbgk30yb63linj5t74pbky3gb20l7x4d9l21lf2c9s7mqujbsyju0r46a7kbn3i8i3uhufyjtghbicxufxcibl4j31b174y4tjfpkgc9osvddzvzaqp2ncr7uwwice5fgcujk7psxwm067fml14qeqyyrmqpv1u1t7k3kzaq94ptnd0qsc7p2eltzph4y892oinp0vz7n7xac3xsdu2ck5pe3kac9j3azgns6aawgfon31go3f89z0oum7pzptv65k9am18cct18r09x8zu3y3qpo3i9q52xkvv508669ksuf3ind17gjncolkreh2vnfoaursbtuj8b849jqsy0skuojmijwgs0yjne5swhjd0b2vo2j0z6qszy4izao5b5xpyzuyh2bj33zwelet8edyut4a6642hptev0cedesivxfy4d0wpmha8p == \d\e\q\2\x\f\w\s\1\q\8\i\q\z\9\y\f\v\g\3\9\1\k\2\5\w\z\3\3\s\9\k\o\s\f\7\5\v\f\x\7\c\6\i\t\f\m\1\e\8\3\0\a\8\w\c\f\o\v\a\g\s\2\4\6\1\c\z\y\l\2\5\p\b\g\k\3\0\y\b\6\3\l\i\n\j\5\t\7\4\p\b\k\y\3\g\b\2\0\l\7\x\4\d\9\l\2\1\l\f\2\c\9\s\7\m\q\u\j\b\s\y\j\u\0\r\4\6\a\7\k\b\n\3\i\8\i\3\u\h\u\f\y\j\t\g\h\b\i\c\x\u\f\x\c\i\b\l\4\j\3\1\b\1\7\4\y\4\t\j\f\p\k\g\c\9\o\s\v\d\d\z\v\z\a\q\p\2\n\c\r\7\u\w\w\i\c\e\5\f\g\c\u\j\k\7\p\s\x\w\m\0\6\7\f\m\l\1\4\q\e\q\y\y\r\m\q\p\v\1\u\1\t\7\k\3\k\z\a\q\9\4\p\t\n\d\0\q\s\c\7\p\2\e\l\t\z\p\h\4\y\8\9\2\o\i\n\p\0\v\z\7\n\7\x\a\c\3\x\s\d\u\2\c\k\5\p\e\3\k\a\c\9\j\3\a\z\g\n\s\6\a\a\w\g\f\o\n\3\1\g\o\3\f\8\9\z\0\o\u\m\7\p\z\p\t\v\6\5\k\9\a\m\1\8\c\c\t\1\8\r\0\9\x\8\z\u\3\y\3\q\p\o\3\i\9\q\5\2\x\k\v\v\5\0\8\6\6\9\k\s\u\f\3\i\n\d\1\7\g\j\n\c\o\l\k\r\e\h\2\v\n\f\o\a\u\r\s\b\t\u\j\8\b\8\4\9\j\q\s\y\0\s\k\u\o\j\m\i\j\w\g\s\0\y\j\n\e\5\s\w\h\j\d\0\b\2\v\o\2\j\0\z\6\q\s\z\y\4\i\z\a\o\5\b\5\x\p\y\z\u\y\h\2\b\j\3\3\z\w\e\l\e\t\8\e\d\y\u\t\4\a\6\6\4\2\h\p\t\e\v\0\c\e\d\e\s\i\v\x\f\y\4\d\0\w\p\m\h\a\8\p ]] 00:07:03.362 00:19:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.362 00:19:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:03.362 [2024-09-29 00:19:19.048644] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:03.362 [2024-09-29 00:19:19.048739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58165 ] 00:07:03.362 [2024-09-29 00:19:19.184252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.619 [2024-09-29 00:19:19.233241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.619  Copying: 512/512 [B] (average 500 kBps) 00:07:03.619 00:07:03.619 00:19:19 -- dd/posix.sh@93 -- # [[ deq2xfws1q8iqz9yfvg391k25wz33s9kosf75vfx7c6itfm1e830a8wcfovags2461czyl25pbgk30yb63linj5t74pbky3gb20l7x4d9l21lf2c9s7mqujbsyju0r46a7kbn3i8i3uhufyjtghbicxufxcibl4j31b174y4tjfpkgc9osvddzvzaqp2ncr7uwwice5fgcujk7psxwm067fml14qeqyyrmqpv1u1t7k3kzaq94ptnd0qsc7p2eltzph4y892oinp0vz7n7xac3xsdu2ck5pe3kac9j3azgns6aawgfon31go3f89z0oum7pzptv65k9am18cct18r09x8zu3y3qpo3i9q52xkvv508669ksuf3ind17gjncolkreh2vnfoaursbtuj8b849jqsy0skuojmijwgs0yjne5swhjd0b2vo2j0z6qszy4izao5b5xpyzuyh2bj33zwelet8edyut4a6642hptev0cedesivxfy4d0wpmha8p == \d\e\q\2\x\f\w\s\1\q\8\i\q\z\9\y\f\v\g\3\9\1\k\2\5\w\z\3\3\s\9\k\o\s\f\7\5\v\f\x\7\c\6\i\t\f\m\1\e\8\3\0\a\8\w\c\f\o\v\a\g\s\2\4\6\1\c\z\y\l\2\5\p\b\g\k\3\0\y\b\6\3\l\i\n\j\5\t\7\4\p\b\k\y\3\g\b\2\0\l\7\x\4\d\9\l\2\1\l\f\2\c\9\s\7\m\q\u\j\b\s\y\j\u\0\r\4\6\a\7\k\b\n\3\i\8\i\3\u\h\u\f\y\j\t\g\h\b\i\c\x\u\f\x\c\i\b\l\4\j\3\1\b\1\7\4\y\4\t\j\f\p\k\g\c\9\o\s\v\d\d\z\v\z\a\q\p\2\n\c\r\7\u\w\w\i\c\e\5\f\g\c\u\j\k\7\p\s\x\w\m\0\6\7\f\m\l\1\4\q\e\q\y\y\r\m\q\p\v\1\u\1\t\7\k\3\k\z\a\q\9\4\p\t\n\d\0\q\s\c\7\p\2\e\l\t\z\p\h\4\y\8\9\2\o\i\n\p\0\v\z\7\n\7\x\a\c\3\x\s\d\u\2\c\k\5\p\e\3\k\a\c\9\j\3\a\z\g\n\s\6\a\a\w\g\f\o\n\3\1\g\o\3\f\8\9\z\0\o\u\m\7\p\z\p\t\v\6\5\k\9\a\m\1\8\c\c\t\1\8\r\0\9\x\8\z\u\3\y\3\q\p\o\3\i\9\q\5\2\x\k\v\v\5\0\8\6\6\9\k\s\u\f\3\i\n\d\1\7\g\j\n\c\o\l\k\r\e\h\2\v\n\f\o\a\u\r\s\b\t\u\j\8\b\8\4\9\j\q\s\y\0\s\k\u\o\j\m\i\j\w\g\s\0\y\j\n\e\5\s\w\h\j\d\0\b\2\v\o\2\j\0\z\6\q\s\z\y\4\i\z\a\o\5\b\5\x\p\y\z\u\y\h\2\b\j\3\3\z\w\e\l\e\t\8\e\d\y\u\t\4\a\6\6\4\2\h\p\t\e\v\0\c\e\d\e\s\i\v\x\f\y\4\d\0\w\p\m\h\a\8\p ]] 00:07:03.619 00:19:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:03.619 00:19:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:03.619 00:19:19 -- dd/common.sh@98 -- # xtrace_disable 00:07:03.619 00:19:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.619 00:19:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.619 00:19:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:03.878 [2024-09-29 00:19:19.508717] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:03.878 [2024-09-29 00:19:19.508818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58167 ] 00:07:03.878 [2024-09-29 00:19:19.644765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.878 [2024-09-29 00:19:19.692106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.136  Copying: 512/512 [B] (average 500 kBps) 00:07:04.136 00:07:04.136 00:19:19 -- dd/posix.sh@93 -- # [[ p08m72q6zfbch9ynmxumzjgavf857rdrvltm7cyhw86vhjsxt7kyp7hbpqpsy75fhp9lkrdgr3pa2kfnwuufavzxa5xomlx5xtzqfsgxy0mq0vk3d8ara8aud9t71ye97lfbk6f8bsuuwd62uptbi74kg514a91j4objtykg60henpy00srxizrk5bzp2aad31k7875twh7hqnzm1fg8hau18pru30knsl0yj43esppfymd8mflab64tojxmyy561mvpidpftd24b5oxw6qgmdbgt6sjn0loqlym2pmwqg27jpa4300resgume1oay1x49budp546wp16xjz39s2n0bkrtax1wcxvlov0hgpggpqikkf986i9k38cp51k6xo4gmcth8aq3zgp6i5t1grr9jelumgdtxgc1o7ip3v6h1puszzvuugx0002f9kx826utrp8zju3zqe7ubf9mypdkkexntv1cinic3ax11353irx7bmljdqaqufu5hheewl == \p\0\8\m\7\2\q\6\z\f\b\c\h\9\y\n\m\x\u\m\z\j\g\a\v\f\8\5\7\r\d\r\v\l\t\m\7\c\y\h\w\8\6\v\h\j\s\x\t\7\k\y\p\7\h\b\p\q\p\s\y\7\5\f\h\p\9\l\k\r\d\g\r\3\p\a\2\k\f\n\w\u\u\f\a\v\z\x\a\5\x\o\m\l\x\5\x\t\z\q\f\s\g\x\y\0\m\q\0\v\k\3\d\8\a\r\a\8\a\u\d\9\t\7\1\y\e\9\7\l\f\b\k\6\f\8\b\s\u\u\w\d\6\2\u\p\t\b\i\7\4\k\g\5\1\4\a\9\1\j\4\o\b\j\t\y\k\g\6\0\h\e\n\p\y\0\0\s\r\x\i\z\r\k\5\b\z\p\2\a\a\d\3\1\k\7\8\7\5\t\w\h\7\h\q\n\z\m\1\f\g\8\h\a\u\1\8\p\r\u\3\0\k\n\s\l\0\y\j\4\3\e\s\p\p\f\y\m\d\8\m\f\l\a\b\6\4\t\o\j\x\m\y\y\5\6\1\m\v\p\i\d\p\f\t\d\2\4\b\5\o\x\w\6\q\g\m\d\b\g\t\6\s\j\n\0\l\o\q\l\y\m\2\p\m\w\q\g\2\7\j\p\a\4\3\0\0\r\e\s\g\u\m\e\1\o\a\y\1\x\4\9\b\u\d\p\5\4\6\w\p\1\6\x\j\z\3\9\s\2\n\0\b\k\r\t\a\x\1\w\c\x\v\l\o\v\0\h\g\p\g\g\p\q\i\k\k\f\9\8\6\i\9\k\3\8\c\p\5\1\k\6\x\o\4\g\m\c\t\h\8\a\q\3\z\g\p\6\i\5\t\1\g\r\r\9\j\e\l\u\m\g\d\t\x\g\c\1\o\7\i\p\3\v\6\h\1\p\u\s\z\z\v\u\u\g\x\0\0\0\2\f\9\k\x\8\2\6\u\t\r\p\8\z\j\u\3\z\q\e\7\u\b\f\9\m\y\p\d\k\k\e\x\n\t\v\1\c\i\n\i\c\3\a\x\1\1\3\5\3\i\r\x\7\b\m\l\j\d\q\a\q\u\f\u\5\h\h\e\e\w\l ]] 00:07:04.136 00:19:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.136 00:19:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:04.136 [2024-09-29 00:19:19.970548] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:04.136 [2024-09-29 00:19:19.970648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58180 ] 00:07:04.395 [2024-09-29 00:19:20.108011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.395 [2024-09-29 00:19:20.153708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.653  Copying: 512/512 [B] (average 500 kBps) 00:07:04.653 00:07:04.653 00:19:20 -- dd/posix.sh@93 -- # [[ p08m72q6zfbch9ynmxumzjgavf857rdrvltm7cyhw86vhjsxt7kyp7hbpqpsy75fhp9lkrdgr3pa2kfnwuufavzxa5xomlx5xtzqfsgxy0mq0vk3d8ara8aud9t71ye97lfbk6f8bsuuwd62uptbi74kg514a91j4objtykg60henpy00srxizrk5bzp2aad31k7875twh7hqnzm1fg8hau18pru30knsl0yj43esppfymd8mflab64tojxmyy561mvpidpftd24b5oxw6qgmdbgt6sjn0loqlym2pmwqg27jpa4300resgume1oay1x49budp546wp16xjz39s2n0bkrtax1wcxvlov0hgpggpqikkf986i9k38cp51k6xo4gmcth8aq3zgp6i5t1grr9jelumgdtxgc1o7ip3v6h1puszzvuugx0002f9kx826utrp8zju3zqe7ubf9mypdkkexntv1cinic3ax11353irx7bmljdqaqufu5hheewl == \p\0\8\m\7\2\q\6\z\f\b\c\h\9\y\n\m\x\u\m\z\j\g\a\v\f\8\5\7\r\d\r\v\l\t\m\7\c\y\h\w\8\6\v\h\j\s\x\t\7\k\y\p\7\h\b\p\q\p\s\y\7\5\f\h\p\9\l\k\r\d\g\r\3\p\a\2\k\f\n\w\u\u\f\a\v\z\x\a\5\x\o\m\l\x\5\x\t\z\q\f\s\g\x\y\0\m\q\0\v\k\3\d\8\a\r\a\8\a\u\d\9\t\7\1\y\e\9\7\l\f\b\k\6\f\8\b\s\u\u\w\d\6\2\u\p\t\b\i\7\4\k\g\5\1\4\a\9\1\j\4\o\b\j\t\y\k\g\6\0\h\e\n\p\y\0\0\s\r\x\i\z\r\k\5\b\z\p\2\a\a\d\3\1\k\7\8\7\5\t\w\h\7\h\q\n\z\m\1\f\g\8\h\a\u\1\8\p\r\u\3\0\k\n\s\l\0\y\j\4\3\e\s\p\p\f\y\m\d\8\m\f\l\a\b\6\4\t\o\j\x\m\y\y\5\6\1\m\v\p\i\d\p\f\t\d\2\4\b\5\o\x\w\6\q\g\m\d\b\g\t\6\s\j\n\0\l\o\q\l\y\m\2\p\m\w\q\g\2\7\j\p\a\4\3\0\0\r\e\s\g\u\m\e\1\o\a\y\1\x\4\9\b\u\d\p\5\4\6\w\p\1\6\x\j\z\3\9\s\2\n\0\b\k\r\t\a\x\1\w\c\x\v\l\o\v\0\h\g\p\g\g\p\q\i\k\k\f\9\8\6\i\9\k\3\8\c\p\5\1\k\6\x\o\4\g\m\c\t\h\8\a\q\3\z\g\p\6\i\5\t\1\g\r\r\9\j\e\l\u\m\g\d\t\x\g\c\1\o\7\i\p\3\v\6\h\1\p\u\s\z\z\v\u\u\g\x\0\0\0\2\f\9\k\x\8\2\6\u\t\r\p\8\z\j\u\3\z\q\e\7\u\b\f\9\m\y\p\d\k\k\e\x\n\t\v\1\c\i\n\i\c\3\a\x\1\1\3\5\3\i\r\x\7\b\m\l\j\d\q\a\q\u\f\u\5\h\h\e\e\w\l ]] 00:07:04.653 00:19:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.653 00:19:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:04.653 [2024-09-29 00:19:20.435386] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:04.653 [2024-09-29 00:19:20.435480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58182 ] 00:07:04.912 [2024-09-29 00:19:20.578219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.912 [2024-09-29 00:19:20.661081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.170  Copying: 512/512 [B] (average 166 kBps) 00:07:05.170 00:07:05.170 00:19:20 -- dd/posix.sh@93 -- # [[ p08m72q6zfbch9ynmxumzjgavf857rdrvltm7cyhw86vhjsxt7kyp7hbpqpsy75fhp9lkrdgr3pa2kfnwuufavzxa5xomlx5xtzqfsgxy0mq0vk3d8ara8aud9t71ye97lfbk6f8bsuuwd62uptbi74kg514a91j4objtykg60henpy00srxizrk5bzp2aad31k7875twh7hqnzm1fg8hau18pru30knsl0yj43esppfymd8mflab64tojxmyy561mvpidpftd24b5oxw6qgmdbgt6sjn0loqlym2pmwqg27jpa4300resgume1oay1x49budp546wp16xjz39s2n0bkrtax1wcxvlov0hgpggpqikkf986i9k38cp51k6xo4gmcth8aq3zgp6i5t1grr9jelumgdtxgc1o7ip3v6h1puszzvuugx0002f9kx826utrp8zju3zqe7ubf9mypdkkexntv1cinic3ax11353irx7bmljdqaqufu5hheewl == \p\0\8\m\7\2\q\6\z\f\b\c\h\9\y\n\m\x\u\m\z\j\g\a\v\f\8\5\7\r\d\r\v\l\t\m\7\c\y\h\w\8\6\v\h\j\s\x\t\7\k\y\p\7\h\b\p\q\p\s\y\7\5\f\h\p\9\l\k\r\d\g\r\3\p\a\2\k\f\n\w\u\u\f\a\v\z\x\a\5\x\o\m\l\x\5\x\t\z\q\f\s\g\x\y\0\m\q\0\v\k\3\d\8\a\r\a\8\a\u\d\9\t\7\1\y\e\9\7\l\f\b\k\6\f\8\b\s\u\u\w\d\6\2\u\p\t\b\i\7\4\k\g\5\1\4\a\9\1\j\4\o\b\j\t\y\k\g\6\0\h\e\n\p\y\0\0\s\r\x\i\z\r\k\5\b\z\p\2\a\a\d\3\1\k\7\8\7\5\t\w\h\7\h\q\n\z\m\1\f\g\8\h\a\u\1\8\p\r\u\3\0\k\n\s\l\0\y\j\4\3\e\s\p\p\f\y\m\d\8\m\f\l\a\b\6\4\t\o\j\x\m\y\y\5\6\1\m\v\p\i\d\p\f\t\d\2\4\b\5\o\x\w\6\q\g\m\d\b\g\t\6\s\j\n\0\l\o\q\l\y\m\2\p\m\w\q\g\2\7\j\p\a\4\3\0\0\r\e\s\g\u\m\e\1\o\a\y\1\x\4\9\b\u\d\p\5\4\6\w\p\1\6\x\j\z\3\9\s\2\n\0\b\k\r\t\a\x\1\w\c\x\v\l\o\v\0\h\g\p\g\g\p\q\i\k\k\f\9\8\6\i\9\k\3\8\c\p\5\1\k\6\x\o\4\g\m\c\t\h\8\a\q\3\z\g\p\6\i\5\t\1\g\r\r\9\j\e\l\u\m\g\d\t\x\g\c\1\o\7\i\p\3\v\6\h\1\p\u\s\z\z\v\u\u\g\x\0\0\0\2\f\9\k\x\8\2\6\u\t\r\p\8\z\j\u\3\z\q\e\7\u\b\f\9\m\y\p\d\k\k\e\x\n\t\v\1\c\i\n\i\c\3\a\x\1\1\3\5\3\i\r\x\7\b\m\l\j\d\q\a\q\u\f\u\5\h\h\e\e\w\l ]] 00:07:05.170 00:19:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.170 00:19:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:05.170 [2024-09-29 00:19:20.982638] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:05.170 [2024-09-29 00:19:20.982756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58195 ] 00:07:05.429 [2024-09-29 00:19:21.123987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.429 [2024-09-29 00:19:21.174665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.688  Copying: 512/512 [B] (average 500 kBps) 00:07:05.688 00:07:05.688 00:19:21 -- dd/posix.sh@93 -- # [[ p08m72q6zfbch9ynmxumzjgavf857rdrvltm7cyhw86vhjsxt7kyp7hbpqpsy75fhp9lkrdgr3pa2kfnwuufavzxa5xomlx5xtzqfsgxy0mq0vk3d8ara8aud9t71ye97lfbk6f8bsuuwd62uptbi74kg514a91j4objtykg60henpy00srxizrk5bzp2aad31k7875twh7hqnzm1fg8hau18pru30knsl0yj43esppfymd8mflab64tojxmyy561mvpidpftd24b5oxw6qgmdbgt6sjn0loqlym2pmwqg27jpa4300resgume1oay1x49budp546wp16xjz39s2n0bkrtax1wcxvlov0hgpggpqikkf986i9k38cp51k6xo4gmcth8aq3zgp6i5t1grr9jelumgdtxgc1o7ip3v6h1puszzvuugx0002f9kx826utrp8zju3zqe7ubf9mypdkkexntv1cinic3ax11353irx7bmljdqaqufu5hheewl == \p\0\8\m\7\2\q\6\z\f\b\c\h\9\y\n\m\x\u\m\z\j\g\a\v\f\8\5\7\r\d\r\v\l\t\m\7\c\y\h\w\8\6\v\h\j\s\x\t\7\k\y\p\7\h\b\p\q\p\s\y\7\5\f\h\p\9\l\k\r\d\g\r\3\p\a\2\k\f\n\w\u\u\f\a\v\z\x\a\5\x\o\m\l\x\5\x\t\z\q\f\s\g\x\y\0\m\q\0\v\k\3\d\8\a\r\a\8\a\u\d\9\t\7\1\y\e\9\7\l\f\b\k\6\f\8\b\s\u\u\w\d\6\2\u\p\t\b\i\7\4\k\g\5\1\4\a\9\1\j\4\o\b\j\t\y\k\g\6\0\h\e\n\p\y\0\0\s\r\x\i\z\r\k\5\b\z\p\2\a\a\d\3\1\k\7\8\7\5\t\w\h\7\h\q\n\z\m\1\f\g\8\h\a\u\1\8\p\r\u\3\0\k\n\s\l\0\y\j\4\3\e\s\p\p\f\y\m\d\8\m\f\l\a\b\6\4\t\o\j\x\m\y\y\5\6\1\m\v\p\i\d\p\f\t\d\2\4\b\5\o\x\w\6\q\g\m\d\b\g\t\6\s\j\n\0\l\o\q\l\y\m\2\p\m\w\q\g\2\7\j\p\a\4\3\0\0\r\e\s\g\u\m\e\1\o\a\y\1\x\4\9\b\u\d\p\5\4\6\w\p\1\6\x\j\z\3\9\s\2\n\0\b\k\r\t\a\x\1\w\c\x\v\l\o\v\0\h\g\p\g\g\p\q\i\k\k\f\9\8\6\i\9\k\3\8\c\p\5\1\k\6\x\o\4\g\m\c\t\h\8\a\q\3\z\g\p\6\i\5\t\1\g\r\r\9\j\e\l\u\m\g\d\t\x\g\c\1\o\7\i\p\3\v\6\h\1\p\u\s\z\z\v\u\u\g\x\0\0\0\2\f\9\k\x\8\2\6\u\t\r\p\8\z\j\u\3\z\q\e\7\u\b\f\9\m\y\p\d\k\k\e\x\n\t\v\1\c\i\n\i\c\3\a\x\1\1\3\5\3\i\r\x\7\b\m\l\j\d\q\a\q\u\f\u\5\h\h\e\e\w\l ]] 00:07:05.688 00:07:05.688 real 0m3.878s 00:07:05.688 user 0m2.138s 00:07:05.688 sys 0m0.758s 00:07:05.688 ************************************ 00:07:05.688 END TEST dd_flags_misc 00:07:05.688 ************************************ 00:07:05.688 00:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.688 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 00:19:21 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:05.688 00:19:21 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:05.688 * Second test run, disabling liburing, forcing AIO 00:07:05.688 00:19:21 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:05.688 00:19:21 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:05.688 00:19:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:05.688 00:19:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.688 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 ************************************ 00:07:05.688 START TEST dd_flag_append_forced_aio 00:07:05.688 ************************************ 00:07:05.688 00:19:21 -- common/autotest_common.sh@1104 -- # append 00:07:05.688 00:19:21 -- dd/posix.sh@16 -- # local dump0 00:07:05.688 00:19:21 -- dd/posix.sh@17 -- # local dump1 00:07:05.688 00:19:21 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:05.688 00:19:21 -- 
dd/common.sh@98 -- # xtrace_disable 00:07:05.688 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 00:19:21 -- dd/posix.sh@19 -- # dump0=7d14sjc0t1omhqozbqlyyexyurzaxo85 00:07:05.688 00:19:21 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:05.688 00:19:21 -- dd/common.sh@98 -- # xtrace_disable 00:07:05.688 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:05.688 00:19:21 -- dd/posix.sh@20 -- # dump1=bcmr9upavo1l7a7ng19xonthxf2qfccx 00:07:05.688 00:19:21 -- dd/posix.sh@22 -- # printf %s 7d14sjc0t1omhqozbqlyyexyurzaxo85 00:07:05.688 00:19:21 -- dd/posix.sh@23 -- # printf %s bcmr9upavo1l7a7ng19xonthxf2qfccx 00:07:05.688 00:19:21 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:05.688 [2024-09-29 00:19:21.511473] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:05.688 [2024-09-29 00:19:21.511578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:07:05.947 [2024-09-29 00:19:21.649058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.947 [2024-09-29 00:19:21.699567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.207  Copying: 32/32 [B] (average 31 kBps) 00:07:06.207 00:07:06.207 00:19:21 -- dd/posix.sh@27 -- # [[ bcmr9upavo1l7a7ng19xonthxf2qfccx7d14sjc0t1omhqozbqlyyexyurzaxo85 == \b\c\m\r\9\u\p\a\v\o\1\l\7\a\7\n\g\1\9\x\o\n\t\h\x\f\2\q\f\c\c\x\7\d\1\4\s\j\c\0\t\1\o\m\h\q\o\z\b\q\l\y\y\e\x\y\u\r\z\a\x\o\8\5 ]] 00:07:06.207 00:07:06.207 real 0m0.474s 00:07:06.207 user 0m0.256s 00:07:06.207 sys 0m0.098s 00:07:06.207 ************************************ 00:07:06.207 END TEST dd_flag_append_forced_aio 00:07:06.207 ************************************ 00:07:06.207 00:19:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.207 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 00:19:21 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:06.207 00:19:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.207 00:19:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.207 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 ************************************ 00:07:06.207 START TEST dd_flag_directory_forced_aio 00:07:06.207 ************************************ 00:07:06.207 00:19:21 -- common/autotest_common.sh@1104 -- # directory 00:07:06.207 00:19:21 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.207 00:19:21 -- common/autotest_common.sh@640 -- # local es=0 00:07:06.207 00:19:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.207 00:19:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.207 00:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.207 00:19:21 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.207 00:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.207 00:19:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.207 00:19:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.207 00:19:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.207 00:19:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.207 00:19:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.207 [2024-09-29 00:19:22.031968] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.207 [2024-09-29 00:19:22.032068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:07:06.467 [2024-09-29 00:19:22.171531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.467 [2024-09-29 00:19:22.239436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.467 [2024-09-29 00:19:22.294145] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.467 [2024-09-29 00:19:22.294208] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.467 [2024-09-29 00:19:22.294224] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.726 [2024-09-29 00:19:22.363454] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:06.726 00:19:22 -- common/autotest_common.sh@643 -- # es=236 00:07:06.726 00:19:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:06.726 00:19:22 -- common/autotest_common.sh@652 -- # es=108 00:07:06.726 00:19:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:06.726 00:19:22 -- common/autotest_common.sh@660 -- # es=1 00:07:06.726 00:19:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:06.726 00:19:22 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.726 00:19:22 -- common/autotest_common.sh@640 -- # local es=0 00:07:06.726 00:19:22 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.726 00:19:22 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.726 00:19:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.726 00:19:22 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.726 00:19:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.726 00:19:22 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.726 00:19:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:06.726 00:19:22 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.726 00:19:22 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.726 00:19:22 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:06.726 [2024-09-29 00:19:22.515560] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:06.726 [2024-09-29 00:19:22.515648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58252 ] 00:07:06.986 [2024-09-29 00:19:22.651900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.986 [2024-09-29 00:19:22.731312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.986 [2024-09-29 00:19:22.785423] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.986 [2024-09-29 00:19:22.785483] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.986 [2024-09-29 00:19:22.785500] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.246 [2024-09-29 00:19:22.858520] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.246 00:19:22 -- common/autotest_common.sh@643 -- # es=236 00:07:07.246 00:19:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.246 00:19:22 -- common/autotest_common.sh@652 -- # es=108 00:07:07.246 00:19:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:07.246 00:19:22 -- common/autotest_common.sh@660 -- # es=1 00:07:07.246 00:19:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.246 00:07:07.246 real 0m0.999s 00:07:07.246 user 0m0.590s 00:07:07.246 sys 0m0.200s 00:07:07.246 00:19:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.246 00:19:22 -- common/autotest_common.sh@10 -- # set +x 00:07:07.246 ************************************ 00:07:07.246 END TEST dd_flag_directory_forced_aio 00:07:07.246 ************************************ 00:07:07.246 00:19:23 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:07.246 00:19:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.246 00:19:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.246 00:19:23 -- common/autotest_common.sh@10 -- # set +x 00:07:07.246 ************************************ 00:07:07.246 START TEST dd_flag_nofollow_forced_aio 00:07:07.246 ************************************ 00:07:07.246 00:19:23 -- common/autotest_common.sh@1104 -- # nofollow 00:07:07.246 00:19:23 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.246 00:19:23 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.246 00:19:23 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:07.246 00:19:23 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:07.246 00:19:23 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.246 00:19:23 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.246 00:19:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.246 00:19:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.246 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.246 00:19:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.246 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.246 00:19:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.246 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.246 00:19:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.246 00:19:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.246 00:19:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.246 [2024-09-29 00:19:23.090378] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:07.246 [2024-09-29 00:19:23.090480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58286 ] 00:07:07.506 [2024-09-29 00:19:23.228492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.506 [2024-09-29 00:19:23.291494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.506 [2024-09-29 00:19:23.345141] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.506 [2024-09-29 00:19:23.345228] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:07.506 [2024-09-29 00:19:23.345243] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.765 [2024-09-29 00:19:23.418141] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:07.765 00:19:23 -- common/autotest_common.sh@643 -- # es=216 00:07:07.765 00:19:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:07.765 00:19:23 -- common/autotest_common.sh@652 -- # es=88 00:07:07.765 00:19:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:07.765 00:19:23 -- common/autotest_common.sh@660 -- # es=1 00:07:07.765 00:19:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:07.765 00:19:23 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.765 00:19:23 -- common/autotest_common.sh@640 -- # local es=0 00:07:07.765 00:19:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.765 00:19:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.765 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.765 00:19:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.765 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.765 00:19:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.765 00:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:07.765 00:19:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.765 00:19:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.765 00:19:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:07.765 [2024-09-29 00:19:23.589698] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:07.765 [2024-09-29 00:19:23.589822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:07:08.028 [2024-09-29 00:19:23.724634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.028 [2024-09-29 00:19:23.787902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.028 [2024-09-29 00:19:23.832754] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.028 [2024-09-29 00:19:23.832819] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:08.028 [2024-09-29 00:19:23.832848] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.288 [2024-09-29 00:19:23.894033] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:08.288 00:19:23 -- common/autotest_common.sh@643 -- # es=216 00:07:08.288 00:19:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:08.288 00:19:23 -- common/autotest_common.sh@652 -- # es=88 00:07:08.288 00:19:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:08.288 00:19:23 -- common/autotest_common.sh@660 -- # es=1 00:07:08.288 00:19:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:08.288 00:19:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:08.288 00:19:23 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.288 00:19:23 -- common/autotest_common.sh@10 -- # set +x 00:07:08.288 00:19:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.288 [2024-09-29 00:19:24.045536] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:08.288 [2024-09-29 00:19:24.045635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:07:08.548 [2024-09-29 00:19:24.180833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.548 [2024-09-29 00:19:24.230774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.807  Copying: 512/512 [B] (average 500 kBps) 00:07:08.807 00:07:08.807 00:19:24 -- dd/posix.sh@49 -- # [[ hfagq1dpsro99wo1cpp0bvemkcsmg42muvoo0tyuwzgumnitzn8wavmaavisjqpew076uzdvgj2ym3eja7r60j51ay4tjm757i2f9pbhfoj82a8wvvlr3iqm5kaiom9n83h5lkwiv67asql5oytb3icjn74tjdvaus56l4h4jxvma6vakcvspebcrmhmg5j33gqlycfrxvjeaa720g21q1hn7e6kynazt87m5zhjj1idy2ngfg1hkmeg4evzi047au4lk6hyojifzdv6jv9vscovktrz3die41v5kxxx5odxzwbyvm9hnbbektrdyh55ncqpjyxsngutt14crb0pezlsm5df9kc0abusx8o03b2xsetzjgwyhndsm6iwyyb5v9gxobaz0sh4lymq3t5hvrxkbgbnl5beinsg0xfg2j0qlntjxklsvt8brqsfetz072vxh7k1khkhlbrp9wbsj70r5pizm0eeo4t3nuq0c4n62t9gtrlu6h9u8b82049l == \h\f\a\g\q\1\d\p\s\r\o\9\9\w\o\1\c\p\p\0\b\v\e\m\k\c\s\m\g\4\2\m\u\v\o\o\0\t\y\u\w\z\g\u\m\n\i\t\z\n\8\w\a\v\m\a\a\v\i\s\j\q\p\e\w\0\7\6\u\z\d\v\g\j\2\y\m\3\e\j\a\7\r\6\0\j\5\1\a\y\4\t\j\m\7\5\7\i\2\f\9\p\b\h\f\o\j\8\2\a\8\w\v\v\l\r\3\i\q\m\5\k\a\i\o\m\9\n\8\3\h\5\l\k\w\i\v\6\7\a\s\q\l\5\o\y\t\b\3\i\c\j\n\7\4\t\j\d\v\a\u\s\5\6\l\4\h\4\j\x\v\m\a\6\v\a\k\c\v\s\p\e\b\c\r\m\h\m\g\5\j\3\3\g\q\l\y\c\f\r\x\v\j\e\a\a\7\2\0\g\2\1\q\1\h\n\7\e\6\k\y\n\a\z\t\8\7\m\5\z\h\j\j\1\i\d\y\2\n\g\f\g\1\h\k\m\e\g\4\e\v\z\i\0\4\7\a\u\4\l\k\6\h\y\o\j\i\f\z\d\v\6\j\v\9\v\s\c\o\v\k\t\r\z\3\d\i\e\4\1\v\5\k\x\x\x\5\o\d\x\z\w\b\y\v\m\9\h\n\b\b\e\k\t\r\d\y\h\5\5\n\c\q\p\j\y\x\s\n\g\u\t\t\1\4\c\r\b\0\p\e\z\l\s\m\5\d\f\9\k\c\0\a\b\u\s\x\8\o\0\3\b\2\x\s\e\t\z\j\g\w\y\h\n\d\s\m\6\i\w\y\y\b\5\v\9\g\x\o\b\a\z\0\s\h\4\l\y\m\q\3\t\5\h\v\r\x\k\b\g\b\n\l\5\b\e\i\n\s\g\0\x\f\g\2\j\0\q\l\n\t\j\x\k\l\s\v\t\8\b\r\q\s\f\e\t\z\0\7\2\v\x\h\7\k\1\k\h\k\h\l\b\r\p\9\w\b\s\j\7\0\r\5\p\i\z\m\0\e\e\o\4\t\3\n\u\q\0\c\4\n\6\2\t\9\g\t\r\l\u\6\h\9\u\8\b\8\2\0\4\9\l ]] 00:07:08.807 00:07:08.807 real 0m1.428s 00:07:08.807 user 0m0.814s 00:07:08.807 sys 0m0.284s 00:07:08.807 ************************************ 00:07:08.807 END TEST dd_flag_nofollow_forced_aio 00:07:08.807 ************************************ 00:07:08.807 00:19:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.807 00:19:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.807 00:19:24 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:08.807 00:19:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.807 00:19:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.807 00:19:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.807 ************************************ 00:07:08.807 START TEST dd_flag_noatime_forced_aio 00:07:08.807 ************************************ 00:07:08.807 00:19:24 -- common/autotest_common.sh@1104 -- # noatime 00:07:08.807 00:19:24 -- dd/posix.sh@53 -- # local atime_if 00:07:08.807 00:19:24 -- dd/posix.sh@54 -- # local atime_of 00:07:08.807 00:19:24 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:08.807 00:19:24 -- dd/common.sh@98 -- # xtrace_disable 00:07:08.807 00:19:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.807 00:19:24 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.807 00:19:24 -- dd/posix.sh@60 -- # atime_if=1727569164 
00:07:08.807 00:19:24 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.807 00:19:24 -- dd/posix.sh@61 -- # atime_of=1727569164 00:07:08.807 00:19:24 -- dd/posix.sh@66 -- # sleep 1 00:07:09.761 00:19:25 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.761 [2024-09-29 00:19:25.585294] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:09.761 [2024-09-29 00:19:25.585421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58338 ] 00:07:10.030 [2024-09-29 00:19:25.724284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.030 [2024-09-29 00:19:25.797616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.289  Copying: 512/512 [B] (average 500 kBps) 00:07:10.289 00:07:10.289 00:19:26 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.289 00:19:26 -- dd/posix.sh@69 -- # (( atime_if == 1727569164 )) 00:07:10.289 00:19:26 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.289 00:19:26 -- dd/posix.sh@70 -- # (( atime_of == 1727569164 )) 00:07:10.289 00:19:26 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.289 [2024-09-29 00:19:26.127772] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:10.289 [2024-09-29 00:19:26.127906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:07:10.549 [2024-09-29 00:19:26.262420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.549 [2024-09-29 00:19:26.313080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.808  Copying: 512/512 [B] (average 500 kBps) 00:07:10.808 00:07:10.808 00:19:26 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.808 00:19:26 -- dd/posix.sh@73 -- # (( atime_if < 1727569166 )) 00:07:10.808 00:07:10.808 real 0m2.037s 00:07:10.808 user 0m0.560s 00:07:10.808 sys 0m0.233s 00:07:10.808 00:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.808 00:19:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.808 ************************************ 00:07:10.808 END TEST dd_flag_noatime_forced_aio 00:07:10.808 ************************************ 00:07:10.808 00:19:26 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:10.808 00:19:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.808 00:19:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.809 00:19:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.809 ************************************ 00:07:10.809 START TEST dd_flags_misc_forced_aio 00:07:10.809 ************************************ 00:07:10.809 00:19:26 -- common/autotest_common.sh@1104 -- # io 00:07:10.809 00:19:26 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:10.809 00:19:26 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:10.809 00:19:26 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:10.809 00:19:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:10.809 00:19:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:10.809 00:19:26 -- dd/common.sh@98 -- # xtrace_disable 00:07:10.809 00:19:26 -- common/autotest_common.sh@10 -- # set +x 00:07:10.809 00:19:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.809 00:19:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:11.068 [2024-09-29 00:19:26.659074] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.068 [2024-09-29 00:19:26.659208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58376 ] 00:07:11.068 [2024-09-29 00:19:26.796588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.068 [2024-09-29 00:19:26.846897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.328  Copying: 512/512 [B] (average 500 kBps) 00:07:11.328 00:07:11.328 00:19:27 -- dd/posix.sh@93 -- # [[ b64h38833vmb3nkkk1330ccuw0ttmlpmascyy38o672ia40fpgytf1avxxd0nzbl4ijs494czbxinnpic9rv8smivw3zfha8yrtcvechahg82eyplvlgt6wcb8o47e58demvniokn1af37yaf4xp9gkacw8k9dpkysyu49xytc47s47sar40420l36s74ud288qqlfrclta4bzlyxmqzjg3nd3o5ehkp8fw1zdrncm3yn57x9f6bkgvkta2uhg3ar3du3gwhzap18vrqk9uinlyao8pvbflu5nikyhrlmp5vi37dnkjr2fofz7se0loq75gf6zuo0psg47b5qzdop5jdgp1aputcxf2qrji301kddk3rx5kf497y13tkfnpvx2p9ls5thvu7q1v658l0hqfdiiujab1bgmf4qnfx7i2wswm4pdkrh23zh0jc3hconbv9l2t1bmq75rndmngv9iid2xugt7fw4ynid4zhp689kiwe6og8wsm60slpq8it == \b\6\4\h\3\8\8\3\3\v\m\b\3\n\k\k\k\1\3\3\0\c\c\u\w\0\t\t\m\l\p\m\a\s\c\y\y\3\8\o\6\7\2\i\a\4\0\f\p\g\y\t\f\1\a\v\x\x\d\0\n\z\b\l\4\i\j\s\4\9\4\c\z\b\x\i\n\n\p\i\c\9\r\v\8\s\m\i\v\w\3\z\f\h\a\8\y\r\t\c\v\e\c\h\a\h\g\8\2\e\y\p\l\v\l\g\t\6\w\c\b\8\o\4\7\e\5\8\d\e\m\v\n\i\o\k\n\1\a\f\3\7\y\a\f\4\x\p\9\g\k\a\c\w\8\k\9\d\p\k\y\s\y\u\4\9\x\y\t\c\4\7\s\4\7\s\a\r\4\0\4\2\0\l\3\6\s\7\4\u\d\2\8\8\q\q\l\f\r\c\l\t\a\4\b\z\l\y\x\m\q\z\j\g\3\n\d\3\o\5\e\h\k\p\8\f\w\1\z\d\r\n\c\m\3\y\n\5\7\x\9\f\6\b\k\g\v\k\t\a\2\u\h\g\3\a\r\3\d\u\3\g\w\h\z\a\p\1\8\v\r\q\k\9\u\i\n\l\y\a\o\8\p\v\b\f\l\u\5\n\i\k\y\h\r\l\m\p\5\v\i\3\7\d\n\k\j\r\2\f\o\f\z\7\s\e\0\l\o\q\7\5\g\f\6\z\u\o\0\p\s\g\4\7\b\5\q\z\d\o\p\5\j\d\g\p\1\a\p\u\t\c\x\f\2\q\r\j\i\3\0\1\k\d\d\k\3\r\x\5\k\f\4\9\7\y\1\3\t\k\f\n\p\v\x\2\p\9\l\s\5\t\h\v\u\7\q\1\v\6\5\8\l\0\h\q\f\d\i\i\u\j\a\b\1\b\g\m\f\4\q\n\f\x\7\i\2\w\s\w\m\4\p\d\k\r\h\2\3\z\h\0\j\c\3\h\c\o\n\b\v\9\l\2\t\1\b\m\q\7\5\r\n\d\m\n\g\v\9\i\i\d\2\x\u\g\t\7\f\w\4\y\n\i\d\4\z\h\p\6\8\9\k\i\w\e\6\o\g\8\w\s\m\6\0\s\l\p\q\8\i\t ]] 00:07:11.328 00:19:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.328 00:19:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:11.328 [2024-09-29 00:19:27.123999] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.328 [2024-09-29 00:19:27.124091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58384 ] 00:07:11.587 [2024-09-29 00:19:27.258678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.587 [2024-09-29 00:19:27.306718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.846  Copying: 512/512 [B] (average 500 kBps) 00:07:11.846 00:07:11.847 00:19:27 -- dd/posix.sh@93 -- # [[ b64h38833vmb3nkkk1330ccuw0ttmlpmascyy38o672ia40fpgytf1avxxd0nzbl4ijs494czbxinnpic9rv8smivw3zfha8yrtcvechahg82eyplvlgt6wcb8o47e58demvniokn1af37yaf4xp9gkacw8k9dpkysyu49xytc47s47sar40420l36s74ud288qqlfrclta4bzlyxmqzjg3nd3o5ehkp8fw1zdrncm3yn57x9f6bkgvkta2uhg3ar3du3gwhzap18vrqk9uinlyao8pvbflu5nikyhrlmp5vi37dnkjr2fofz7se0loq75gf6zuo0psg47b5qzdop5jdgp1aputcxf2qrji301kddk3rx5kf497y13tkfnpvx2p9ls5thvu7q1v658l0hqfdiiujab1bgmf4qnfx7i2wswm4pdkrh23zh0jc3hconbv9l2t1bmq75rndmngv9iid2xugt7fw4ynid4zhp689kiwe6og8wsm60slpq8it == \b\6\4\h\3\8\8\3\3\v\m\b\3\n\k\k\k\1\3\3\0\c\c\u\w\0\t\t\m\l\p\m\a\s\c\y\y\3\8\o\6\7\2\i\a\4\0\f\p\g\y\t\f\1\a\v\x\x\d\0\n\z\b\l\4\i\j\s\4\9\4\c\z\b\x\i\n\n\p\i\c\9\r\v\8\s\m\i\v\w\3\z\f\h\a\8\y\r\t\c\v\e\c\h\a\h\g\8\2\e\y\p\l\v\l\g\t\6\w\c\b\8\o\4\7\e\5\8\d\e\m\v\n\i\o\k\n\1\a\f\3\7\y\a\f\4\x\p\9\g\k\a\c\w\8\k\9\d\p\k\y\s\y\u\4\9\x\y\t\c\4\7\s\4\7\s\a\r\4\0\4\2\0\l\3\6\s\7\4\u\d\2\8\8\q\q\l\f\r\c\l\t\a\4\b\z\l\y\x\m\q\z\j\g\3\n\d\3\o\5\e\h\k\p\8\f\w\1\z\d\r\n\c\m\3\y\n\5\7\x\9\f\6\b\k\g\v\k\t\a\2\u\h\g\3\a\r\3\d\u\3\g\w\h\z\a\p\1\8\v\r\q\k\9\u\i\n\l\y\a\o\8\p\v\b\f\l\u\5\n\i\k\y\h\r\l\m\p\5\v\i\3\7\d\n\k\j\r\2\f\o\f\z\7\s\e\0\l\o\q\7\5\g\f\6\z\u\o\0\p\s\g\4\7\b\5\q\z\d\o\p\5\j\d\g\p\1\a\p\u\t\c\x\f\2\q\r\j\i\3\0\1\k\d\d\k\3\r\x\5\k\f\4\9\7\y\1\3\t\k\f\n\p\v\x\2\p\9\l\s\5\t\h\v\u\7\q\1\v\6\5\8\l\0\h\q\f\d\i\i\u\j\a\b\1\b\g\m\f\4\q\n\f\x\7\i\2\w\s\w\m\4\p\d\k\r\h\2\3\z\h\0\j\c\3\h\c\o\n\b\v\9\l\2\t\1\b\m\q\7\5\r\n\d\m\n\g\v\9\i\i\d\2\x\u\g\t\7\f\w\4\y\n\i\d\4\z\h\p\6\8\9\k\i\w\e\6\o\g\8\w\s\m\6\0\s\l\p\q\8\i\t ]] 00:07:11.847 00:19:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.847 00:19:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:11.847 [2024-09-29 00:19:27.575619] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:11.847 [2024-09-29 00:19:27.575711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58391 ] 00:07:12.106 [2024-09-29 00:19:27.713672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.106 [2024-09-29 00:19:27.762209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.365  Copying: 512/512 [B] (average 125 kBps) 00:07:12.365 00:07:12.365 00:19:28 -- dd/posix.sh@93 -- # [[ b64h38833vmb3nkkk1330ccuw0ttmlpmascyy38o672ia40fpgytf1avxxd0nzbl4ijs494czbxinnpic9rv8smivw3zfha8yrtcvechahg82eyplvlgt6wcb8o47e58demvniokn1af37yaf4xp9gkacw8k9dpkysyu49xytc47s47sar40420l36s74ud288qqlfrclta4bzlyxmqzjg3nd3o5ehkp8fw1zdrncm3yn57x9f6bkgvkta2uhg3ar3du3gwhzap18vrqk9uinlyao8pvbflu5nikyhrlmp5vi37dnkjr2fofz7se0loq75gf6zuo0psg47b5qzdop5jdgp1aputcxf2qrji301kddk3rx5kf497y13tkfnpvx2p9ls5thvu7q1v658l0hqfdiiujab1bgmf4qnfx7i2wswm4pdkrh23zh0jc3hconbv9l2t1bmq75rndmngv9iid2xugt7fw4ynid4zhp689kiwe6og8wsm60slpq8it == \b\6\4\h\3\8\8\3\3\v\m\b\3\n\k\k\k\1\3\3\0\c\c\u\w\0\t\t\m\l\p\m\a\s\c\y\y\3\8\o\6\7\2\i\a\4\0\f\p\g\y\t\f\1\a\v\x\x\d\0\n\z\b\l\4\i\j\s\4\9\4\c\z\b\x\i\n\n\p\i\c\9\r\v\8\s\m\i\v\w\3\z\f\h\a\8\y\r\t\c\v\e\c\h\a\h\g\8\2\e\y\p\l\v\l\g\t\6\w\c\b\8\o\4\7\e\5\8\d\e\m\v\n\i\o\k\n\1\a\f\3\7\y\a\f\4\x\p\9\g\k\a\c\w\8\k\9\d\p\k\y\s\y\u\4\9\x\y\t\c\4\7\s\4\7\s\a\r\4\0\4\2\0\l\3\6\s\7\4\u\d\2\8\8\q\q\l\f\r\c\l\t\a\4\b\z\l\y\x\m\q\z\j\g\3\n\d\3\o\5\e\h\k\p\8\f\w\1\z\d\r\n\c\m\3\y\n\5\7\x\9\f\6\b\k\g\v\k\t\a\2\u\h\g\3\a\r\3\d\u\3\g\w\h\z\a\p\1\8\v\r\q\k\9\u\i\n\l\y\a\o\8\p\v\b\f\l\u\5\n\i\k\y\h\r\l\m\p\5\v\i\3\7\d\n\k\j\r\2\f\o\f\z\7\s\e\0\l\o\q\7\5\g\f\6\z\u\o\0\p\s\g\4\7\b\5\q\z\d\o\p\5\j\d\g\p\1\a\p\u\t\c\x\f\2\q\r\j\i\3\0\1\k\d\d\k\3\r\x\5\k\f\4\9\7\y\1\3\t\k\f\n\p\v\x\2\p\9\l\s\5\t\h\v\u\7\q\1\v\6\5\8\l\0\h\q\f\d\i\i\u\j\a\b\1\b\g\m\f\4\q\n\f\x\7\i\2\w\s\w\m\4\p\d\k\r\h\2\3\z\h\0\j\c\3\h\c\o\n\b\v\9\l\2\t\1\b\m\q\7\5\r\n\d\m\n\g\v\9\i\i\d\2\x\u\g\t\7\f\w\4\y\n\i\d\4\z\h\p\6\8\9\k\i\w\e\6\o\g\8\w\s\m\6\0\s\l\p\q\8\i\t ]] 00:07:12.365 00:19:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.365 00:19:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:12.365 [2024-09-29 00:19:28.067943] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.365 [2024-09-29 00:19:28.068038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58399 ] 00:07:12.365 [2024-09-29 00:19:28.206113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.624 [2024-09-29 00:19:28.270403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.885  Copying: 512/512 [B] (average 250 kBps) 00:07:12.885 00:07:12.885 00:19:28 -- dd/posix.sh@93 -- # [[ b64h38833vmb3nkkk1330ccuw0ttmlpmascyy38o672ia40fpgytf1avxxd0nzbl4ijs494czbxinnpic9rv8smivw3zfha8yrtcvechahg82eyplvlgt6wcb8o47e58demvniokn1af37yaf4xp9gkacw8k9dpkysyu49xytc47s47sar40420l36s74ud288qqlfrclta4bzlyxmqzjg3nd3o5ehkp8fw1zdrncm3yn57x9f6bkgvkta2uhg3ar3du3gwhzap18vrqk9uinlyao8pvbflu5nikyhrlmp5vi37dnkjr2fofz7se0loq75gf6zuo0psg47b5qzdop5jdgp1aputcxf2qrji301kddk3rx5kf497y13tkfnpvx2p9ls5thvu7q1v658l0hqfdiiujab1bgmf4qnfx7i2wswm4pdkrh23zh0jc3hconbv9l2t1bmq75rndmngv9iid2xugt7fw4ynid4zhp689kiwe6og8wsm60slpq8it == \b\6\4\h\3\8\8\3\3\v\m\b\3\n\k\k\k\1\3\3\0\c\c\u\w\0\t\t\m\l\p\m\a\s\c\y\y\3\8\o\6\7\2\i\a\4\0\f\p\g\y\t\f\1\a\v\x\x\d\0\n\z\b\l\4\i\j\s\4\9\4\c\z\b\x\i\n\n\p\i\c\9\r\v\8\s\m\i\v\w\3\z\f\h\a\8\y\r\t\c\v\e\c\h\a\h\g\8\2\e\y\p\l\v\l\g\t\6\w\c\b\8\o\4\7\e\5\8\d\e\m\v\n\i\o\k\n\1\a\f\3\7\y\a\f\4\x\p\9\g\k\a\c\w\8\k\9\d\p\k\y\s\y\u\4\9\x\y\t\c\4\7\s\4\7\s\a\r\4\0\4\2\0\l\3\6\s\7\4\u\d\2\8\8\q\q\l\f\r\c\l\t\a\4\b\z\l\y\x\m\q\z\j\g\3\n\d\3\o\5\e\h\k\p\8\f\w\1\z\d\r\n\c\m\3\y\n\5\7\x\9\f\6\b\k\g\v\k\t\a\2\u\h\g\3\a\r\3\d\u\3\g\w\h\z\a\p\1\8\v\r\q\k\9\u\i\n\l\y\a\o\8\p\v\b\f\l\u\5\n\i\k\y\h\r\l\m\p\5\v\i\3\7\d\n\k\j\r\2\f\o\f\z\7\s\e\0\l\o\q\7\5\g\f\6\z\u\o\0\p\s\g\4\7\b\5\q\z\d\o\p\5\j\d\g\p\1\a\p\u\t\c\x\f\2\q\r\j\i\3\0\1\k\d\d\k\3\r\x\5\k\f\4\9\7\y\1\3\t\k\f\n\p\v\x\2\p\9\l\s\5\t\h\v\u\7\q\1\v\6\5\8\l\0\h\q\f\d\i\i\u\j\a\b\1\b\g\m\f\4\q\n\f\x\7\i\2\w\s\w\m\4\p\d\k\r\h\2\3\z\h\0\j\c\3\h\c\o\n\b\v\9\l\2\t\1\b\m\q\7\5\r\n\d\m\n\g\v\9\i\i\d\2\x\u\g\t\7\f\w\4\y\n\i\d\4\z\h\p\6\8\9\k\i\w\e\6\o\g\8\w\s\m\6\0\s\l\p\q\8\i\t ]] 00:07:12.885 00:19:28 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.885 00:19:28 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.885 00:19:28 -- dd/common.sh@98 -- # xtrace_disable 00:07:12.885 00:19:28 -- common/autotest_common.sh@10 -- # set +x 00:07:12.885 00:19:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.885 00:19:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.885 [2024-09-29 00:19:28.604955] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:12.885 [2024-09-29 00:19:28.605062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58406 ] 00:07:13.145 [2024-09-29 00:19:28.740849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.145 [2024-09-29 00:19:28.810503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.403  Copying: 512/512 [B] (average 500 kBps) 00:07:13.403 00:07:13.403 00:19:29 -- dd/posix.sh@93 -- # [[ 3clcwelovrorvu8abkbr45mj3r6aaj5jbbszt85otti0xb4j65gsgsn1lnklp1011fpyx7f4hkbxhc1ayrm6gffm08suixvzuyn1zhv95k47jywpsvklsj73azi0uvvjehv6tcfnbl12apuk03ruus905tjqs9zvdnnob58ihd2978icm0amvt2rf4wzp982p9dggmsquwjnqlyv4iu1ihz6xtiqa32db5ibu972qket214d5awxlnnoj4d61od3oegotnlnd72ryqzqhabtvso0t3ra2y0cv3z6la3wy9essfvj2okoxnvs409s0bsv48npndshtj94262qlsa6perl61xe9liw1a8n60g6rh76xpx55h4m3eto19x9cljwy0lyp1inasbpvoielyad96dyh5x21h7sehjx4u8ibpa54zdpnq9nd2hg0mvlk6kdng994wmn3r46wqa0gmcond9wjhhx33cvfroruui4aux9mcy6cl25tsbb55j35c8f == \3\c\l\c\w\e\l\o\v\r\o\r\v\u\8\a\b\k\b\r\4\5\m\j\3\r\6\a\a\j\5\j\b\b\s\z\t\8\5\o\t\t\i\0\x\b\4\j\6\5\g\s\g\s\n\1\l\n\k\l\p\1\0\1\1\f\p\y\x\7\f\4\h\k\b\x\h\c\1\a\y\r\m\6\g\f\f\m\0\8\s\u\i\x\v\z\u\y\n\1\z\h\v\9\5\k\4\7\j\y\w\p\s\v\k\l\s\j\7\3\a\z\i\0\u\v\v\j\e\h\v\6\t\c\f\n\b\l\1\2\a\p\u\k\0\3\r\u\u\s\9\0\5\t\j\q\s\9\z\v\d\n\n\o\b\5\8\i\h\d\2\9\7\8\i\c\m\0\a\m\v\t\2\r\f\4\w\z\p\9\8\2\p\9\d\g\g\m\s\q\u\w\j\n\q\l\y\v\4\i\u\1\i\h\z\6\x\t\i\q\a\3\2\d\b\5\i\b\u\9\7\2\q\k\e\t\2\1\4\d\5\a\w\x\l\n\n\o\j\4\d\6\1\o\d\3\o\e\g\o\t\n\l\n\d\7\2\r\y\q\z\q\h\a\b\t\v\s\o\0\t\3\r\a\2\y\0\c\v\3\z\6\l\a\3\w\y\9\e\s\s\f\v\j\2\o\k\o\x\n\v\s\4\0\9\s\0\b\s\v\4\8\n\p\n\d\s\h\t\j\9\4\2\6\2\q\l\s\a\6\p\e\r\l\6\1\x\e\9\l\i\w\1\a\8\n\6\0\g\6\r\h\7\6\x\p\x\5\5\h\4\m\3\e\t\o\1\9\x\9\c\l\j\w\y\0\l\y\p\1\i\n\a\s\b\p\v\o\i\e\l\y\a\d\9\6\d\y\h\5\x\2\1\h\7\s\e\h\j\x\4\u\8\i\b\p\a\5\4\z\d\p\n\q\9\n\d\2\h\g\0\m\v\l\k\6\k\d\n\g\9\9\4\w\m\n\3\r\4\6\w\q\a\0\g\m\c\o\n\d\9\w\j\h\h\x\3\3\c\v\f\r\o\r\u\u\i\4\a\u\x\9\m\c\y\6\c\l\2\5\t\s\b\b\5\5\j\3\5\c\8\f ]] 00:07:13.403 00:19:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.403 00:19:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:13.403 [2024-09-29 00:19:29.096084] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:13.403 [2024-09-29 00:19:29.096217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:07:13.403 [2024-09-29 00:19:29.233325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.662 [2024-09-29 00:19:29.285618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.921  Copying: 512/512 [B] (average 500 kBps) 00:07:13.921 00:07:13.921 00:19:29 -- dd/posix.sh@93 -- # [[ 3clcwelovrorvu8abkbr45mj3r6aaj5jbbszt85otti0xb4j65gsgsn1lnklp1011fpyx7f4hkbxhc1ayrm6gffm08suixvzuyn1zhv95k47jywpsvklsj73azi0uvvjehv6tcfnbl12apuk03ruus905tjqs9zvdnnob58ihd2978icm0amvt2rf4wzp982p9dggmsquwjnqlyv4iu1ihz6xtiqa32db5ibu972qket214d5awxlnnoj4d61od3oegotnlnd72ryqzqhabtvso0t3ra2y0cv3z6la3wy9essfvj2okoxnvs409s0bsv48npndshtj94262qlsa6perl61xe9liw1a8n60g6rh76xpx55h4m3eto19x9cljwy0lyp1inasbpvoielyad96dyh5x21h7sehjx4u8ibpa54zdpnq9nd2hg0mvlk6kdng994wmn3r46wqa0gmcond9wjhhx33cvfroruui4aux9mcy6cl25tsbb55j35c8f == \3\c\l\c\w\e\l\o\v\r\o\r\v\u\8\a\b\k\b\r\4\5\m\j\3\r\6\a\a\j\5\j\b\b\s\z\t\8\5\o\t\t\i\0\x\b\4\j\6\5\g\s\g\s\n\1\l\n\k\l\p\1\0\1\1\f\p\y\x\7\f\4\h\k\b\x\h\c\1\a\y\r\m\6\g\f\f\m\0\8\s\u\i\x\v\z\u\y\n\1\z\h\v\9\5\k\4\7\j\y\w\p\s\v\k\l\s\j\7\3\a\z\i\0\u\v\v\j\e\h\v\6\t\c\f\n\b\l\1\2\a\p\u\k\0\3\r\u\u\s\9\0\5\t\j\q\s\9\z\v\d\n\n\o\b\5\8\i\h\d\2\9\7\8\i\c\m\0\a\m\v\t\2\r\f\4\w\z\p\9\8\2\p\9\d\g\g\m\s\q\u\w\j\n\q\l\y\v\4\i\u\1\i\h\z\6\x\t\i\q\a\3\2\d\b\5\i\b\u\9\7\2\q\k\e\t\2\1\4\d\5\a\w\x\l\n\n\o\j\4\d\6\1\o\d\3\o\e\g\o\t\n\l\n\d\7\2\r\y\q\z\q\h\a\b\t\v\s\o\0\t\3\r\a\2\y\0\c\v\3\z\6\l\a\3\w\y\9\e\s\s\f\v\j\2\o\k\o\x\n\v\s\4\0\9\s\0\b\s\v\4\8\n\p\n\d\s\h\t\j\9\4\2\6\2\q\l\s\a\6\p\e\r\l\6\1\x\e\9\l\i\w\1\a\8\n\6\0\g\6\r\h\7\6\x\p\x\5\5\h\4\m\3\e\t\o\1\9\x\9\c\l\j\w\y\0\l\y\p\1\i\n\a\s\b\p\v\o\i\e\l\y\a\d\9\6\d\y\h\5\x\2\1\h\7\s\e\h\j\x\4\u\8\i\b\p\a\5\4\z\d\p\n\q\9\n\d\2\h\g\0\m\v\l\k\6\k\d\n\g\9\9\4\w\m\n\3\r\4\6\w\q\a\0\g\m\c\o\n\d\9\w\j\h\h\x\3\3\c\v\f\r\o\r\u\u\i\4\a\u\x\9\m\c\y\6\c\l\2\5\t\s\b\b\5\5\j\3\5\c\8\f ]] 00:07:13.921 00:19:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.921 00:19:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.921 [2024-09-29 00:19:29.564607] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:13.921 [2024-09-29 00:19:29.564699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:07:13.921 [2024-09-29 00:19:29.701148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.921 [2024-09-29 00:19:29.758217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.181  Copying: 512/512 [B] (average 500 kBps) 00:07:14.181 00:07:14.181 00:19:29 -- dd/posix.sh@93 -- # [[ 3clcwelovrorvu8abkbr45mj3r6aaj5jbbszt85otti0xb4j65gsgsn1lnklp1011fpyx7f4hkbxhc1ayrm6gffm08suixvzuyn1zhv95k47jywpsvklsj73azi0uvvjehv6tcfnbl12apuk03ruus905tjqs9zvdnnob58ihd2978icm0amvt2rf4wzp982p9dggmsquwjnqlyv4iu1ihz6xtiqa32db5ibu972qket214d5awxlnnoj4d61od3oegotnlnd72ryqzqhabtvso0t3ra2y0cv3z6la3wy9essfvj2okoxnvs409s0bsv48npndshtj94262qlsa6perl61xe9liw1a8n60g6rh76xpx55h4m3eto19x9cljwy0lyp1inasbpvoielyad96dyh5x21h7sehjx4u8ibpa54zdpnq9nd2hg0mvlk6kdng994wmn3r46wqa0gmcond9wjhhx33cvfroruui4aux9mcy6cl25tsbb55j35c8f == \3\c\l\c\w\e\l\o\v\r\o\r\v\u\8\a\b\k\b\r\4\5\m\j\3\r\6\a\a\j\5\j\b\b\s\z\t\8\5\o\t\t\i\0\x\b\4\j\6\5\g\s\g\s\n\1\l\n\k\l\p\1\0\1\1\f\p\y\x\7\f\4\h\k\b\x\h\c\1\a\y\r\m\6\g\f\f\m\0\8\s\u\i\x\v\z\u\y\n\1\z\h\v\9\5\k\4\7\j\y\w\p\s\v\k\l\s\j\7\3\a\z\i\0\u\v\v\j\e\h\v\6\t\c\f\n\b\l\1\2\a\p\u\k\0\3\r\u\u\s\9\0\5\t\j\q\s\9\z\v\d\n\n\o\b\5\8\i\h\d\2\9\7\8\i\c\m\0\a\m\v\t\2\r\f\4\w\z\p\9\8\2\p\9\d\g\g\m\s\q\u\w\j\n\q\l\y\v\4\i\u\1\i\h\z\6\x\t\i\q\a\3\2\d\b\5\i\b\u\9\7\2\q\k\e\t\2\1\4\d\5\a\w\x\l\n\n\o\j\4\d\6\1\o\d\3\o\e\g\o\t\n\l\n\d\7\2\r\y\q\z\q\h\a\b\t\v\s\o\0\t\3\r\a\2\y\0\c\v\3\z\6\l\a\3\w\y\9\e\s\s\f\v\j\2\o\k\o\x\n\v\s\4\0\9\s\0\b\s\v\4\8\n\p\n\d\s\h\t\j\9\4\2\6\2\q\l\s\a\6\p\e\r\l\6\1\x\e\9\l\i\w\1\a\8\n\6\0\g\6\r\h\7\6\x\p\x\5\5\h\4\m\3\e\t\o\1\9\x\9\c\l\j\w\y\0\l\y\p\1\i\n\a\s\b\p\v\o\i\e\l\y\a\d\9\6\d\y\h\5\x\2\1\h\7\s\e\h\j\x\4\u\8\i\b\p\a\5\4\z\d\p\n\q\9\n\d\2\h\g\0\m\v\l\k\6\k\d\n\g\9\9\4\w\m\n\3\r\4\6\w\q\a\0\g\m\c\o\n\d\9\w\j\h\h\x\3\3\c\v\f\r\o\r\u\u\i\4\a\u\x\9\m\c\y\6\c\l\2\5\t\s\b\b\5\5\j\3\5\c\8\f ]] 00:07:14.181 00:19:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.181 00:19:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.181 [2024-09-29 00:19:30.022793] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:14.181 [2024-09-29 00:19:30.022896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58429 ] 00:07:14.440 [2024-09-29 00:19:30.161658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.440 [2024-09-29 00:19:30.210637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.699  Copying: 512/512 [B] (average 500 kBps) 00:07:14.699 00:07:14.699 ************************************ 00:07:14.699 END TEST dd_flags_misc_forced_aio 00:07:14.699 ************************************ 00:07:14.699 00:19:30 -- dd/posix.sh@93 -- # [[ 3clcwelovrorvu8abkbr45mj3r6aaj5jbbszt85otti0xb4j65gsgsn1lnklp1011fpyx7f4hkbxhc1ayrm6gffm08suixvzuyn1zhv95k47jywpsvklsj73azi0uvvjehv6tcfnbl12apuk03ruus905tjqs9zvdnnob58ihd2978icm0amvt2rf4wzp982p9dggmsquwjnqlyv4iu1ihz6xtiqa32db5ibu972qket214d5awxlnnoj4d61od3oegotnlnd72ryqzqhabtvso0t3ra2y0cv3z6la3wy9essfvj2okoxnvs409s0bsv48npndshtj94262qlsa6perl61xe9liw1a8n60g6rh76xpx55h4m3eto19x9cljwy0lyp1inasbpvoielyad96dyh5x21h7sehjx4u8ibpa54zdpnq9nd2hg0mvlk6kdng994wmn3r46wqa0gmcond9wjhhx33cvfroruui4aux9mcy6cl25tsbb55j35c8f == \3\c\l\c\w\e\l\o\v\r\o\r\v\u\8\a\b\k\b\r\4\5\m\j\3\r\6\a\a\j\5\j\b\b\s\z\t\8\5\o\t\t\i\0\x\b\4\j\6\5\g\s\g\s\n\1\l\n\k\l\p\1\0\1\1\f\p\y\x\7\f\4\h\k\b\x\h\c\1\a\y\r\m\6\g\f\f\m\0\8\s\u\i\x\v\z\u\y\n\1\z\h\v\9\5\k\4\7\j\y\w\p\s\v\k\l\s\j\7\3\a\z\i\0\u\v\v\j\e\h\v\6\t\c\f\n\b\l\1\2\a\p\u\k\0\3\r\u\u\s\9\0\5\t\j\q\s\9\z\v\d\n\n\o\b\5\8\i\h\d\2\9\7\8\i\c\m\0\a\m\v\t\2\r\f\4\w\z\p\9\8\2\p\9\d\g\g\m\s\q\u\w\j\n\q\l\y\v\4\i\u\1\i\h\z\6\x\t\i\q\a\3\2\d\b\5\i\b\u\9\7\2\q\k\e\t\2\1\4\d\5\a\w\x\l\n\n\o\j\4\d\6\1\o\d\3\o\e\g\o\t\n\l\n\d\7\2\r\y\q\z\q\h\a\b\t\v\s\o\0\t\3\r\a\2\y\0\c\v\3\z\6\l\a\3\w\y\9\e\s\s\f\v\j\2\o\k\o\x\n\v\s\4\0\9\s\0\b\s\v\4\8\n\p\n\d\s\h\t\j\9\4\2\6\2\q\l\s\a\6\p\e\r\l\6\1\x\e\9\l\i\w\1\a\8\n\6\0\g\6\r\h\7\6\x\p\x\5\5\h\4\m\3\e\t\o\1\9\x\9\c\l\j\w\y\0\l\y\p\1\i\n\a\s\b\p\v\o\i\e\l\y\a\d\9\6\d\y\h\5\x\2\1\h\7\s\e\h\j\x\4\u\8\i\b\p\a\5\4\z\d\p\n\q\9\n\d\2\h\g\0\m\v\l\k\6\k\d\n\g\9\9\4\w\m\n\3\r\4\6\w\q\a\0\g\m\c\o\n\d\9\w\j\h\h\x\3\3\c\v\f\r\o\r\u\u\i\4\a\u\x\9\m\c\y\6\c\l\2\5\t\s\b\b\5\5\j\3\5\c\8\f ]] 00:07:14.699 00:07:14.699 real 0m3.877s 00:07:14.699 user 0m2.111s 00:07:14.699 sys 0m0.772s 00:07:14.699 00:19:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.699 00:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.699 00:19:30 -- dd/posix.sh@1 -- # cleanup 00:07:14.699 00:19:30 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:14.699 00:19:30 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:14.699 ************************************ 00:07:14.699 END TEST spdk_dd_posix 00:07:14.699 ************************************ 00:07:14.699 00:07:14.699 real 0m18.017s 00:07:14.699 user 0m8.754s 00:07:14.699 sys 0m3.422s 00:07:14.699 00:19:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.699 00:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.959 00:19:30 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.959 00:19:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.959 00:19:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.959 00:19:30 -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.959 ************************************ 00:07:14.959 START TEST spdk_dd_malloc 00:07:14.959 ************************************ 00:07:14.959 00:19:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.959 * Looking for test storage... 00:07:14.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.959 00:19:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.959 00:19:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.959 00:19:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.959 00:19:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.959 00:19:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.959 00:19:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.959 00:19:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.959 00:19:30 -- paths/export.sh@5 -- # export PATH 00:07:14.959 00:19:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.959 00:19:30 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:14.959 00:19:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.959 00:19:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.959 00:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.959 ************************************ 00:07:14.959 START TEST dd_malloc_copy 00:07:14.959 
************************************ 00:07:14.959 00:19:30 -- common/autotest_common.sh@1104 -- # malloc_copy 00:07:14.959 00:19:30 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:14.959 00:19:30 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:14.959 00:19:30 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.959 00:19:30 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:14.959 00:19:30 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.959 00:19:30 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:14.959 00:19:30 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:14.959 00:19:30 -- dd/malloc.sh@28 -- # gen_conf 00:07:14.959 00:19:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:14.959 00:19:30 -- common/autotest_common.sh@10 -- # set +x 00:07:14.959 [2024-09-29 00:19:30.729554] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:14.959 [2024-09-29 00:19:30.729653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58496 ] 00:07:14.959 { 00:07:14.959 "subsystems": [ 00:07:14.959 { 00:07:14.959 "subsystem": "bdev", 00:07:14.959 "config": [ 00:07:14.959 { 00:07:14.959 "params": { 00:07:14.959 "block_size": 512, 00:07:14.959 "num_blocks": 1048576, 00:07:14.959 "name": "malloc0" 00:07:14.959 }, 00:07:14.959 "method": "bdev_malloc_create" 00:07:14.959 }, 00:07:14.959 { 00:07:14.959 "params": { 00:07:14.959 "block_size": 512, 00:07:14.959 "num_blocks": 1048576, 00:07:14.959 "name": "malloc1" 00:07:14.959 }, 00:07:14.959 "method": "bdev_malloc_create" 00:07:14.959 }, 00:07:14.959 { 00:07:14.959 "method": "bdev_wait_for_examine" 00:07:14.959 } 00:07:14.959 ] 00:07:14.959 } 00:07:14.959 ] 00:07:14.959 } 00:07:15.219 [2024-09-29 00:19:30.868489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.219 [2024-09-29 00:19:30.920479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.357  Copying: 201/512 [MB] (201 MBps) Copying: 392/512 [MB] (190 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:07:18.357 00:07:18.357 00:19:34 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:18.357 00:19:34 -- dd/malloc.sh@33 -- # gen_conf 00:07:18.357 00:19:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.357 00:19:34 -- common/autotest_common.sh@10 -- # set +x 00:07:18.616 [2024-09-29 00:19:34.223411] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:18.616 [2024-09-29 00:19:34.223513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58544 ] 00:07:18.616 { 00:07:18.616 "subsystems": [ 00:07:18.616 { 00:07:18.616 "subsystem": "bdev", 00:07:18.616 "config": [ 00:07:18.616 { 00:07:18.616 "params": { 00:07:18.616 "block_size": 512, 00:07:18.616 "num_blocks": 1048576, 00:07:18.616 "name": "malloc0" 00:07:18.616 }, 00:07:18.616 "method": "bdev_malloc_create" 00:07:18.616 }, 00:07:18.616 { 00:07:18.616 "params": { 00:07:18.616 "block_size": 512, 00:07:18.616 "num_blocks": 1048576, 00:07:18.616 "name": "malloc1" 00:07:18.616 }, 00:07:18.616 "method": "bdev_malloc_create" 00:07:18.616 }, 00:07:18.616 { 00:07:18.616 "method": "bdev_wait_for_examine" 00:07:18.616 } 00:07:18.616 ] 00:07:18.616 } 00:07:18.616 ] 00:07:18.616 } 00:07:18.616 [2024-09-29 00:19:34.360203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.616 [2024-09-29 00:19:34.424755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.080  Copying: 196/512 [MB] (196 MBps) Copying: 396/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:22.080 00:07:22.080 00:07:22.080 real 0m6.996s 00:07:22.080 user 0m6.307s 00:07:22.080 sys 0m0.518s 00:07:22.080 00:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.080 ************************************ 00:07:22.080 END TEST dd_malloc_copy 00:07:22.080 ************************************ 00:07:22.080 00:19:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.080 ************************************ 00:07:22.080 END TEST spdk_dd_malloc 00:07:22.080 ************************************ 00:07:22.080 00:07:22.080 real 0m7.138s 00:07:22.080 user 0m6.353s 00:07:22.080 sys 0m0.611s 00:07:22.080 00:19:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.080 00:19:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.080 00:19:37 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:22.080 00:19:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:22.080 00:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.080 00:19:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.080 ************************************ 00:07:22.080 START TEST spdk_dd_bdev_to_bdev 00:07:22.080 ************************************ 00:07:22.080 00:19:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:22.080 * Looking for test storage... 
00:07:22.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.080 00:19:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.080 00:19:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.080 00:19:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.080 00:19:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.080 00:19:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.080 00:19:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.080 00:19:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.080 00:19:37 -- paths/export.sh@5 -- # export PATH 00:07:22.081 00:19:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:22.081 00:19:37 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.081 00:19:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:22.081 00:19:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.081 00:19:37 -- common/autotest_common.sh@10 -- # set +x 00:07:22.081 ************************************ 00:07:22.081 START TEST dd_inflate_file 00:07:22.081 ************************************ 00:07:22.081 00:19:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.081 [2024-09-29 00:19:37.927985] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:22.081 [2024-09-29 00:19:37.928297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58647 ] 00:07:22.340 [2024-09-29 00:19:38.076041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.340 [2024-09-29 00:19:38.147290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.857  Copying: 64/64 [MB] (average 1641 MBps) 00:07:22.857 00:07:22.857 00:07:22.857 ************************************ 00:07:22.857 END TEST dd_inflate_file 00:07:22.857 ************************************ 00:07:22.857 real 0m0.605s 00:07:22.857 user 0m0.324s 00:07:22.857 sys 0m0.162s 00:07:22.857 00:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.857 00:19:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.857 00:19:38 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:22.857 00:19:38 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:22.857 00:19:38 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.857 00:19:38 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:22.857 00:19:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:22.857 00:19:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:22.857 00:19:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.857 00:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.857 00:19:38 -- common/autotest_common.sh@10 -- # set +x 00:07:22.857 ************************************ 00:07:22.857 START TEST dd_copy_to_out_bdev 
00:07:22.857 ************************************ 00:07:22.857 00:19:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.857 [2024-09-29 00:19:38.573163] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:22.857 [2024-09-29 00:19:38.573257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58679 ] 00:07:22.857 { 00:07:22.857 "subsystems": [ 00:07:22.857 { 00:07:22.857 "subsystem": "bdev", 00:07:22.857 "config": [ 00:07:22.857 { 00:07:22.857 "params": { 00:07:22.857 "trtype": "pcie", 00:07:22.857 "traddr": "0000:00:06.0", 00:07:22.857 "name": "Nvme0" 00:07:22.857 }, 00:07:22.857 "method": "bdev_nvme_attach_controller" 00:07:22.857 }, 00:07:22.857 { 00:07:22.857 "params": { 00:07:22.857 "trtype": "pcie", 00:07:22.857 "traddr": "0000:00:07.0", 00:07:22.857 "name": "Nvme1" 00:07:22.857 }, 00:07:22.857 "method": "bdev_nvme_attach_controller" 00:07:22.857 }, 00:07:22.857 { 00:07:22.857 "method": "bdev_wait_for_examine" 00:07:22.857 } 00:07:22.857 ] 00:07:22.857 } 00:07:22.857 ] 00:07:22.857 } 00:07:23.117 [2024-09-29 00:19:38.712460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.117 [2024-09-29 00:19:38.783568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.751  Copying: 46/64 [MB] (46 MBps) Copying: 64/64 [MB] (average 47 MBps) 00:07:24.751 00:07:24.751 00:07:24.751 real 0m2.059s 00:07:24.751 user 0m1.810s 00:07:24.751 sys 0m0.184s 00:07:24.751 ************************************ 00:07:24.751 END TEST dd_copy_to_out_bdev 00:07:24.751 ************************************ 00:07:24.751 00:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.751 00:19:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:25.009 00:19:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:25.009 00:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.009 00:19:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.009 ************************************ 00:07:25.009 START TEST dd_offset_magic 00:07:25.009 ************************************ 00:07:25.009 00:19:40 -- common/autotest_common.sh@1104 -- # offset_magic 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:25.009 00:19:40 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:25.009 00:19:40 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.009 00:19:40 -- common/autotest_common.sh@10 -- # set +x 00:07:25.009 [2024-09-29 00:19:40.709389] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:25.010 [2024-09-29 00:19:40.710053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58723 ] 00:07:25.010 { 00:07:25.010 "subsystems": [ 00:07:25.010 { 00:07:25.010 "subsystem": "bdev", 00:07:25.010 "config": [ 00:07:25.010 { 00:07:25.010 "params": { 00:07:25.010 "trtype": "pcie", 00:07:25.010 "traddr": "0000:00:06.0", 00:07:25.010 "name": "Nvme0" 00:07:25.010 }, 00:07:25.010 "method": "bdev_nvme_attach_controller" 00:07:25.010 }, 00:07:25.010 { 00:07:25.010 "params": { 00:07:25.010 "trtype": "pcie", 00:07:25.010 "traddr": "0000:00:07.0", 00:07:25.010 "name": "Nvme1" 00:07:25.010 }, 00:07:25.010 "method": "bdev_nvme_attach_controller" 00:07:25.010 }, 00:07:25.010 { 00:07:25.010 "method": "bdev_wait_for_examine" 00:07:25.010 } 00:07:25.010 ] 00:07:25.010 } 00:07:25.010 ] 00:07:25.010 } 00:07:25.010 [2024-09-29 00:19:40.855193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.269 [2024-09-29 00:19:40.931992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.786  Copying: 65/65 [MB] (average 970 MBps) 00:07:25.786 00:07:25.786 00:19:41 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:25.786 00:19:41 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:25.786 00:19:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:25.786 00:19:41 -- common/autotest_common.sh@10 -- # set +x 00:07:25.786 [2024-09-29 00:19:41.452691] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:25.786 [2024-09-29 00:19:41.452788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58743 ] 00:07:25.786 { 00:07:25.786 "subsystems": [ 00:07:25.786 { 00:07:25.786 "subsystem": "bdev", 00:07:25.786 "config": [ 00:07:25.786 { 00:07:25.786 "params": { 00:07:25.786 "trtype": "pcie", 00:07:25.786 "traddr": "0000:00:06.0", 00:07:25.786 "name": "Nvme0" 00:07:25.786 }, 00:07:25.786 "method": "bdev_nvme_attach_controller" 00:07:25.786 }, 00:07:25.786 { 00:07:25.786 "params": { 00:07:25.786 "trtype": "pcie", 00:07:25.786 "traddr": "0000:00:07.0", 00:07:25.786 "name": "Nvme1" 00:07:25.786 }, 00:07:25.786 "method": "bdev_nvme_attach_controller" 00:07:25.786 }, 00:07:25.786 { 00:07:25.786 "method": "bdev_wait_for_examine" 00:07:25.786 } 00:07:25.786 ] 00:07:25.786 } 00:07:25.786 ] 00:07:25.786 } 00:07:25.786 [2024-09-29 00:19:41.588571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.045 [2024-09-29 00:19:41.639792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.333  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:26.333 00:07:26.333 00:19:42 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.333 00:19:42 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.333 00:19:42 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:26.333 00:19:42 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:26.333 00:19:42 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:26.333 00:19:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:26.333 00:19:42 -- common/autotest_common.sh@10 -- # set +x 00:07:26.333 [2024-09-29 00:19:42.089451] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:26.333 [2024-09-29 00:19:42.089553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58762 ] 00:07:26.333 { 00:07:26.333 "subsystems": [ 00:07:26.333 { 00:07:26.333 "subsystem": "bdev", 00:07:26.333 "config": [ 00:07:26.333 { 00:07:26.333 "params": { 00:07:26.333 "trtype": "pcie", 00:07:26.333 "traddr": "0000:00:06.0", 00:07:26.333 "name": "Nvme0" 00:07:26.333 }, 00:07:26.333 "method": "bdev_nvme_attach_controller" 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "params": { 00:07:26.333 "trtype": "pcie", 00:07:26.333 "traddr": "0000:00:07.0", 00:07:26.333 "name": "Nvme1" 00:07:26.333 }, 00:07:26.333 "method": "bdev_nvme_attach_controller" 00:07:26.333 }, 00:07:26.333 { 00:07:26.333 "method": "bdev_wait_for_examine" 00:07:26.333 } 00:07:26.333 ] 00:07:26.333 } 00:07:26.333 ] 00:07:26.333 } 00:07:26.592 [2024-09-29 00:19:42.228914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.592 [2024-09-29 00:19:42.299875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.110  Copying: 65/65 [MB] (average 1083 MBps) 00:07:27.110 00:07:27.110 00:19:42 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:27.110 00:19:42 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:27.110 00:19:42 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.110 00:19:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.110 [2024-09-29 00:19:42.797750] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:27.110 [2024-09-29 00:19:42.798410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58772 ] 00:07:27.110 { 00:07:27.110 "subsystems": [ 00:07:27.110 { 00:07:27.110 "subsystem": "bdev", 00:07:27.110 "config": [ 00:07:27.110 { 00:07:27.110 "params": { 00:07:27.110 "trtype": "pcie", 00:07:27.110 "traddr": "0000:00:06.0", 00:07:27.110 "name": "Nvme0" 00:07:27.110 }, 00:07:27.110 "method": "bdev_nvme_attach_controller" 00:07:27.110 }, 00:07:27.110 { 00:07:27.110 "params": { 00:07:27.110 "trtype": "pcie", 00:07:27.110 "traddr": "0000:00:07.0", 00:07:27.110 "name": "Nvme1" 00:07:27.110 }, 00:07:27.110 "method": "bdev_nvme_attach_controller" 00:07:27.110 }, 00:07:27.110 { 00:07:27.110 "method": "bdev_wait_for_examine" 00:07:27.110 } 00:07:27.110 ] 00:07:27.110 } 00:07:27.110 ] 00:07:27.110 } 00:07:27.110 [2024-09-29 00:19:42.936838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.370 [2024-09-29 00:19:42.991178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.628  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:27.628 00:07:27.628 00:19:43 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:27.628 00:19:43 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:27.628 00:07:27.628 real 0m2.712s 00:07:27.628 user 0m2.056s 00:07:27.628 sys 0m0.478s 00:07:27.628 00:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.628 00:19:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.628 ************************************ 00:07:27.628 END TEST dd_offset_magic 00:07:27.628 ************************************ 00:07:27.628 00:19:43 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:27.628 00:19:43 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:27.628 00:19:43 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.628 00:19:43 -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.628 00:19:43 -- dd/common.sh@12 -- # local size=4194330 00:07:27.628 00:19:43 -- dd/common.sh@14 -- # local bs=1048576 00:07:27.628 00:19:43 -- dd/common.sh@15 -- # local count=5 00:07:27.628 00:19:43 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:27.628 00:19:43 -- dd/common.sh@18 -- # gen_conf 00:07:27.628 00:19:43 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.628 00:19:43 -- common/autotest_common.sh@10 -- # set +x 00:07:27.628 [2024-09-29 00:19:43.450077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:27.628 [2024-09-29 00:19:43.450161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58807 ] 00:07:27.628 { 00:07:27.628 "subsystems": [ 00:07:27.628 { 00:07:27.628 "subsystem": "bdev", 00:07:27.628 "config": [ 00:07:27.628 { 00:07:27.628 "params": { 00:07:27.628 "trtype": "pcie", 00:07:27.628 "traddr": "0000:00:06.0", 00:07:27.628 "name": "Nvme0" 00:07:27.628 }, 00:07:27.628 "method": "bdev_nvme_attach_controller" 00:07:27.628 }, 00:07:27.628 { 00:07:27.628 "params": { 00:07:27.628 "trtype": "pcie", 00:07:27.628 "traddr": "0000:00:07.0", 00:07:27.628 "name": "Nvme1" 00:07:27.628 }, 00:07:27.628 "method": "bdev_nvme_attach_controller" 00:07:27.628 }, 00:07:27.628 { 00:07:27.628 "method": "bdev_wait_for_examine" 00:07:27.628 } 00:07:27.628 ] 00:07:27.628 } 00:07:27.628 ] 00:07:27.628 } 00:07:27.886 [2024-09-29 00:19:43.586096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.886 [2024-09-29 00:19:43.634523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.403  Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:28.403 00:07:28.403 00:19:43 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:28.403 00:19:43 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:28.403 00:19:43 -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.403 00:19:43 -- dd/common.sh@12 -- # local size=4194330 00:07:28.403 00:19:43 -- dd/common.sh@14 -- # local bs=1048576 00:07:28.403 00:19:43 -- dd/common.sh@15 -- # local count=5 00:07:28.403 00:19:44 -- dd/common.sh@18 -- # gen_conf 00:07:28.403 00:19:44 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:28.403 00:19:44 -- dd/common.sh@31 -- # xtrace_disable 00:07:28.403 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.403 [2024-09-29 00:19:44.053604] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:28.403 [2024-09-29 00:19:44.053709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58816 ] 00:07:28.403 { 00:07:28.403 "subsystems": [ 00:07:28.403 { 00:07:28.403 "subsystem": "bdev", 00:07:28.403 "config": [ 00:07:28.403 { 00:07:28.403 "params": { 00:07:28.403 "trtype": "pcie", 00:07:28.403 "traddr": "0000:00:06.0", 00:07:28.403 "name": "Nvme0" 00:07:28.403 }, 00:07:28.403 "method": "bdev_nvme_attach_controller" 00:07:28.403 }, 00:07:28.403 { 00:07:28.403 "params": { 00:07:28.403 "trtype": "pcie", 00:07:28.403 "traddr": "0000:00:07.0", 00:07:28.403 "name": "Nvme1" 00:07:28.403 }, 00:07:28.403 "method": "bdev_nvme_attach_controller" 00:07:28.403 }, 00:07:28.403 { 00:07:28.403 "method": "bdev_wait_for_examine" 00:07:28.403 } 00:07:28.403 ] 00:07:28.403 } 00:07:28.403 ] 00:07:28.403 } 00:07:28.403 [2024-09-29 00:19:44.189487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.403 [2024-09-29 00:19:44.237788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.920  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:28.920 00:07:28.920 00:19:44 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:28.920 00:07:28.920 real 0m6.836s 00:07:28.920 user 0m5.154s 00:07:28.920 sys 0m1.209s 00:07:28.920 00:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.920 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.920 ************************************ 00:07:28.920 END TEST spdk_dd_bdev_to_bdev 00:07:28.920 ************************************ 00:07:28.920 00:19:44 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:28.920 00:19:44 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.920 00:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.920 00:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.920 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.920 ************************************ 00:07:28.920 START TEST spdk_dd_uring 00:07:28.920 ************************************ 00:07:28.920 00:19:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:28.920 * Looking for test storage... 
00:07:28.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.920 00:19:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.920 00:19:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.920 00:19:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.920 00:19:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.920 00:19:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.920 00:19:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.920 00:19:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.920 00:19:44 -- paths/export.sh@5 -- # export PATH 00:07:28.920 00:19:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.920 00:19:44 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:28.920 00:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:28.920 00:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.920 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:07:28.920 ************************************ 00:07:28.920 START TEST dd_uring_copy 00:07:28.920 ************************************ 00:07:28.920 00:19:44 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:07:28.920 00:19:44 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:28.920 00:19:44 -- dd/uring.sh@16 -- # local magic 00:07:28.920 00:19:44 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:28.920 00:19:44 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:28.920 00:19:44 -- dd/uring.sh@19 -- # local verify_magic 00:07:28.920 00:19:44 -- dd/uring.sh@21 -- # init_zram 00:07:28.920 00:19:44 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:28.920 00:19:44 -- dd/common.sh@164 -- # return 00:07:28.920 00:19:44 -- dd/uring.sh@22 -- # create_zram_dev 00:07:28.920 00:19:44 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:28.920 00:19:44 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:28.920 00:19:44 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:28.920 00:19:44 -- dd/common.sh@181 -- # local id=1 00:07:28.920 00:19:44 -- dd/common.sh@182 -- # local size=512M 00:07:28.920 00:19:44 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:28.920 00:19:44 -- dd/common.sh@186 -- # echo 512M 00:07:28.920 00:19:44 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:28.920 00:19:44 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:28.920 00:19:44 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:28.920 00:19:44 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:28.920 00:19:44 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:28.920 00:19:44 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:28.920 00:19:44 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:28.920 00:19:44 -- dd/common.sh@98 -- # xtrace_disable 00:07:28.920 00:19:44 -- common/autotest_common.sh@10 -- # set +x 00:07:29.179 00:19:44 -- dd/uring.sh@41 -- # magic=sdswzyco1dl4u2uqjcqbef8a5ka9vfx3lprkn76doqfhcwadhuw3iqsh4sakr2izr5mowo2hy3oactz0d94fwapxrtkme5gihu8via3yz24yuxgm7jqg8lfuq4s4en6jvlmf4gh3ev9n2b8u59d55d8t1o2p82nc1okmcx5tqj87txo64dgg06onos0xc0prgkhzysjzrmgdpilifspdlmkzqbf62o1p3lvkme5em8zi4th1ap57a1f8s0qovyzplh8sppk7y0gu1oq4vece2fq7jr9ng2miie9wchmh75pnk9qh6d70lwibqv6erb102jylc54551jmk66110vjdkjo34c9fzo7mmwhpvfq2iv8op1ytbeprpcrurmcd8ht861fqvsj03exc87xnxt3rgdcl7m3atv5krdoqgsfopmp0dizj9wk9mrnd3f5rb1t9eapjszbmq3jp01gj7e4wzd8sn6pjivk2iofg7z5mgt2mv6kq3bb8sxala4qdw16vhp4oqlxub711dbrer0mwymv68atyzjlfi2y5ypli8r40u9heh29jxehna484744ugjldx8dr3cf7y2joxbt2523csxqzxyf2zqo2b02i57yinlzhodeiksix9sfazi8pgy2eq9m5z2gfvkh7uu5zagf073kfauqm29z7e9ishomkwxzclug041cqfqdqvovc533y8neba9wafy47uq4ja0jii5p0gjo23e1kozxkcyan7blt69agpo5x2y7juhwh2x79sgbibh53gj0z07070we39t7q4k8e5gmrjtecziqnx9nxhgrlc8bwhx6yadm4dnbmykpnyk26vuraaypsz0s83wiyhm920qyf4i5g6c8e7wbetexmvffv43h2wngllpxw5c71vlhat2wxvh0tlh98za3vcldl7exhwx3ol4v0l9xv7gefneqggvr3j8gvwzpzfasd7ii2hdmd8u91dlmee8ojwet9kzfzkl9hkvzyvuqt9dcusp3w11q1i9w 00:07:29.179 00:19:44 -- dd/uring.sh@42 -- # echo 
sdswzyco1dl4u2uqjcqbef8a5ka9vfx3lprkn76doqfhcwadhuw3iqsh4sakr2izr5mowo2hy3oactz0d94fwapxrtkme5gihu8via3yz24yuxgm7jqg8lfuq4s4en6jvlmf4gh3ev9n2b8u59d55d8t1o2p82nc1okmcx5tqj87txo64dgg06onos0xc0prgkhzysjzrmgdpilifspdlmkzqbf62o1p3lvkme5em8zi4th1ap57a1f8s0qovyzplh8sppk7y0gu1oq4vece2fq7jr9ng2miie9wchmh75pnk9qh6d70lwibqv6erb102jylc54551jmk66110vjdkjo34c9fzo7mmwhpvfq2iv8op1ytbeprpcrurmcd8ht861fqvsj03exc87xnxt3rgdcl7m3atv5krdoqgsfopmp0dizj9wk9mrnd3f5rb1t9eapjszbmq3jp01gj7e4wzd8sn6pjivk2iofg7z5mgt2mv6kq3bb8sxala4qdw16vhp4oqlxub711dbrer0mwymv68atyzjlfi2y5ypli8r40u9heh29jxehna484744ugjldx8dr3cf7y2joxbt2523csxqzxyf2zqo2b02i57yinlzhodeiksix9sfazi8pgy2eq9m5z2gfvkh7uu5zagf073kfauqm29z7e9ishomkwxzclug041cqfqdqvovc533y8neba9wafy47uq4ja0jii5p0gjo23e1kozxkcyan7blt69agpo5x2y7juhwh2x79sgbibh53gj0z07070we39t7q4k8e5gmrjtecziqnx9nxhgrlc8bwhx6yadm4dnbmykpnyk26vuraaypsz0s83wiyhm920qyf4i5g6c8e7wbetexmvffv43h2wngllpxw5c71vlhat2wxvh0tlh98za3vcldl7exhwx3ol4v0l9xv7gefneqggvr3j8gvwzpzfasd7ii2hdmd8u91dlmee8ojwet9kzfzkl9hkvzyvuqt9dcusp3w11q1i9w 00:07:29.179 00:19:44 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:29.179 [2024-09-29 00:19:44.809181] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:29.179 [2024-09-29 00:19:44.809280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58884 ] 00:07:29.179 [2024-09-29 00:19:44.938151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.180 [2024-09-29 00:19:44.986912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.006  Copying: 511/511 [MB] (average 1841 MBps) 00:07:30.007 00:07:30.007 00:19:45 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:30.007 00:19:45 -- dd/uring.sh@54 -- # gen_conf 00:07:30.007 00:19:45 -- dd/common.sh@31 -- # xtrace_disable 00:07:30.007 00:19:45 -- common/autotest_common.sh@10 -- # set +x 00:07:30.007 [2024-09-29 00:19:45.712026] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:30.007 [2024-09-29 00:19:45.712124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:07:30.007 { 00:07:30.007 "subsystems": [ 00:07:30.007 { 00:07:30.007 "subsystem": "bdev", 00:07:30.007 "config": [ 00:07:30.007 { 00:07:30.007 "params": { 00:07:30.007 "block_size": 512, 00:07:30.007 "num_blocks": 1048576, 00:07:30.007 "name": "malloc0" 00:07:30.007 }, 00:07:30.007 "method": "bdev_malloc_create" 00:07:30.007 }, 00:07:30.007 { 00:07:30.007 "params": { 00:07:30.007 "filename": "/dev/zram1", 00:07:30.007 "name": "uring0" 00:07:30.007 }, 00:07:30.007 "method": "bdev_uring_create" 00:07:30.007 }, 00:07:30.007 { 00:07:30.007 "method": "bdev_wait_for_examine" 00:07:30.007 } 00:07:30.007 ] 00:07:30.007 } 00:07:30.007 ] 00:07:30.007 } 00:07:30.007 [2024-09-29 00:19:45.845129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.265 [2024-09-29 00:19:45.894429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.845  Copying: 237/512 [MB] (237 MBps) Copying: 468/512 [MB] (231 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:07:32.845 00:07:32.845 00:19:48 -- dd/uring.sh@60 -- # gen_conf 00:07:32.845 00:19:48 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:32.845 00:19:48 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.845 00:19:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.845 [2024-09-29 00:19:48.571925] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:32.845 [2024-09-29 00:19:48.572020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:07:32.845 { 00:07:32.845 "subsystems": [ 00:07:32.845 { 00:07:32.845 "subsystem": "bdev", 00:07:32.845 "config": [ 00:07:32.845 { 00:07:32.845 "params": { 00:07:32.845 "block_size": 512, 00:07:32.845 "num_blocks": 1048576, 00:07:32.845 "name": "malloc0" 00:07:32.845 }, 00:07:32.845 "method": "bdev_malloc_create" 00:07:32.845 }, 00:07:32.845 { 00:07:32.845 "params": { 00:07:32.845 "filename": "/dev/zram1", 00:07:32.845 "name": "uring0" 00:07:32.845 }, 00:07:32.845 "method": "bdev_uring_create" 00:07:32.845 }, 00:07:32.845 { 00:07:32.845 "method": "bdev_wait_for_examine" 00:07:32.845 } 00:07:32.845 ] 00:07:32.845 } 00:07:32.845 ] 00:07:32.845 } 00:07:33.104 [2024-09-29 00:19:48.709723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.104 [2024-09-29 00:19:48.770125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.197  Copying: 134/512 [MB] (134 MBps) Copying: 261/512 [MB] (126 MBps) Copying: 403/512 [MB] (141 MBps) Copying: 512/512 [MB] (average 135 MBps) 00:07:37.197 00:07:37.197 00:19:52 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:37.197 00:19:52 -- dd/uring.sh@66 -- # [[ 
sdswzyco1dl4u2uqjcqbef8a5ka9vfx3lprkn76doqfhcwadhuw3iqsh4sakr2izr5mowo2hy3oactz0d94fwapxrtkme5gihu8via3yz24yuxgm7jqg8lfuq4s4en6jvlmf4gh3ev9n2b8u59d55d8t1o2p82nc1okmcx5tqj87txo64dgg06onos0xc0prgkhzysjzrmgdpilifspdlmkzqbf62o1p3lvkme5em8zi4th1ap57a1f8s0qovyzplh8sppk7y0gu1oq4vece2fq7jr9ng2miie9wchmh75pnk9qh6d70lwibqv6erb102jylc54551jmk66110vjdkjo34c9fzo7mmwhpvfq2iv8op1ytbeprpcrurmcd8ht861fqvsj03exc87xnxt3rgdcl7m3atv5krdoqgsfopmp0dizj9wk9mrnd3f5rb1t9eapjszbmq3jp01gj7e4wzd8sn6pjivk2iofg7z5mgt2mv6kq3bb8sxala4qdw16vhp4oqlxub711dbrer0mwymv68atyzjlfi2y5ypli8r40u9heh29jxehna484744ugjldx8dr3cf7y2joxbt2523csxqzxyf2zqo2b02i57yinlzhodeiksix9sfazi8pgy2eq9m5z2gfvkh7uu5zagf073kfauqm29z7e9ishomkwxzclug041cqfqdqvovc533y8neba9wafy47uq4ja0jii5p0gjo23e1kozxkcyan7blt69agpo5x2y7juhwh2x79sgbibh53gj0z07070we39t7q4k8e5gmrjtecziqnx9nxhgrlc8bwhx6yadm4dnbmykpnyk26vuraaypsz0s83wiyhm920qyf4i5g6c8e7wbetexmvffv43h2wngllpxw5c71vlhat2wxvh0tlh98za3vcldl7exhwx3ol4v0l9xv7gefneqggvr3j8gvwzpzfasd7ii2hdmd8u91dlmee8ojwet9kzfzkl9hkvzyvuqt9dcusp3w11q1i9w == \s\d\s\w\z\y\c\o\1\d\l\4\u\2\u\q\j\c\q\b\e\f\8\a\5\k\a\9\v\f\x\3\l\p\r\k\n\7\6\d\o\q\f\h\c\w\a\d\h\u\w\3\i\q\s\h\4\s\a\k\r\2\i\z\r\5\m\o\w\o\2\h\y\3\o\a\c\t\z\0\d\9\4\f\w\a\p\x\r\t\k\m\e\5\g\i\h\u\8\v\i\a\3\y\z\2\4\y\u\x\g\m\7\j\q\g\8\l\f\u\q\4\s\4\e\n\6\j\v\l\m\f\4\g\h\3\e\v\9\n\2\b\8\u\5\9\d\5\5\d\8\t\1\o\2\p\8\2\n\c\1\o\k\m\c\x\5\t\q\j\8\7\t\x\o\6\4\d\g\g\0\6\o\n\o\s\0\x\c\0\p\r\g\k\h\z\y\s\j\z\r\m\g\d\p\i\l\i\f\s\p\d\l\m\k\z\q\b\f\6\2\o\1\p\3\l\v\k\m\e\5\e\m\8\z\i\4\t\h\1\a\p\5\7\a\1\f\8\s\0\q\o\v\y\z\p\l\h\8\s\p\p\k\7\y\0\g\u\1\o\q\4\v\e\c\e\2\f\q\7\j\r\9\n\g\2\m\i\i\e\9\w\c\h\m\h\7\5\p\n\k\9\q\h\6\d\7\0\l\w\i\b\q\v\6\e\r\b\1\0\2\j\y\l\c\5\4\5\5\1\j\m\k\6\6\1\1\0\v\j\d\k\j\o\3\4\c\9\f\z\o\7\m\m\w\h\p\v\f\q\2\i\v\8\o\p\1\y\t\b\e\p\r\p\c\r\u\r\m\c\d\8\h\t\8\6\1\f\q\v\s\j\0\3\e\x\c\8\7\x\n\x\t\3\r\g\d\c\l\7\m\3\a\t\v\5\k\r\d\o\q\g\s\f\o\p\m\p\0\d\i\z\j\9\w\k\9\m\r\n\d\3\f\5\r\b\1\t\9\e\a\p\j\s\z\b\m\q\3\j\p\0\1\g\j\7\e\4\w\z\d\8\s\n\6\p\j\i\v\k\2\i\o\f\g\7\z\5\m\g\t\2\m\v\6\k\q\3\b\b\8\s\x\a\l\a\4\q\d\w\1\6\v\h\p\4\o\q\l\x\u\b\7\1\1\d\b\r\e\r\0\m\w\y\m\v\6\8\a\t\y\z\j\l\f\i\2\y\5\y\p\l\i\8\r\4\0\u\9\h\e\h\2\9\j\x\e\h\n\a\4\8\4\7\4\4\u\g\j\l\d\x\8\d\r\3\c\f\7\y\2\j\o\x\b\t\2\5\2\3\c\s\x\q\z\x\y\f\2\z\q\o\2\b\0\2\i\5\7\y\i\n\l\z\h\o\d\e\i\k\s\i\x\9\s\f\a\z\i\8\p\g\y\2\e\q\9\m\5\z\2\g\f\v\k\h\7\u\u\5\z\a\g\f\0\7\3\k\f\a\u\q\m\2\9\z\7\e\9\i\s\h\o\m\k\w\x\z\c\l\u\g\0\4\1\c\q\f\q\d\q\v\o\v\c\5\3\3\y\8\n\e\b\a\9\w\a\f\y\4\7\u\q\4\j\a\0\j\i\i\5\p\0\g\j\o\2\3\e\1\k\o\z\x\k\c\y\a\n\7\b\l\t\6\9\a\g\p\o\5\x\2\y\7\j\u\h\w\h\2\x\7\9\s\g\b\i\b\h\5\3\g\j\0\z\0\7\0\7\0\w\e\3\9\t\7\q\4\k\8\e\5\g\m\r\j\t\e\c\z\i\q\n\x\9\n\x\h\g\r\l\c\8\b\w\h\x\6\y\a\d\m\4\d\n\b\m\y\k\p\n\y\k\2\6\v\u\r\a\a\y\p\s\z\0\s\8\3\w\i\y\h\m\9\2\0\q\y\f\4\i\5\g\6\c\8\e\7\w\b\e\t\e\x\m\v\f\f\v\4\3\h\2\w\n\g\l\l\p\x\w\5\c\7\1\v\l\h\a\t\2\w\x\v\h\0\t\l\h\9\8\z\a\3\v\c\l\d\l\7\e\x\h\w\x\3\o\l\4\v\0\l\9\x\v\7\g\e\f\n\e\q\g\g\v\r\3\j\8\g\v\w\z\p\z\f\a\s\d\7\i\i\2\h\d\m\d\8\u\9\1\d\l\m\e\e\8\o\j\w\e\t\9\k\z\f\z\k\l\9\h\k\v\z\y\v\u\q\t\9\d\c\u\s\p\3\w\1\1\q\1\i\9\w ]] 00:07:37.197 00:19:52 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:37.198 00:19:52 -- dd/uring.sh@69 -- # [[ 
sdswzyco1dl4u2uqjcqbef8a5ka9vfx3lprkn76doqfhcwadhuw3iqsh4sakr2izr5mowo2hy3oactz0d94fwapxrtkme5gihu8via3yz24yuxgm7jqg8lfuq4s4en6jvlmf4gh3ev9n2b8u59d55d8t1o2p82nc1okmcx5tqj87txo64dgg06onos0xc0prgkhzysjzrmgdpilifspdlmkzqbf62o1p3lvkme5em8zi4th1ap57a1f8s0qovyzplh8sppk7y0gu1oq4vece2fq7jr9ng2miie9wchmh75pnk9qh6d70lwibqv6erb102jylc54551jmk66110vjdkjo34c9fzo7mmwhpvfq2iv8op1ytbeprpcrurmcd8ht861fqvsj03exc87xnxt3rgdcl7m3atv5krdoqgsfopmp0dizj9wk9mrnd3f5rb1t9eapjszbmq3jp01gj7e4wzd8sn6pjivk2iofg7z5mgt2mv6kq3bb8sxala4qdw16vhp4oqlxub711dbrer0mwymv68atyzjlfi2y5ypli8r40u9heh29jxehna484744ugjldx8dr3cf7y2joxbt2523csxqzxyf2zqo2b02i57yinlzhodeiksix9sfazi8pgy2eq9m5z2gfvkh7uu5zagf073kfauqm29z7e9ishomkwxzclug041cqfqdqvovc533y8neba9wafy47uq4ja0jii5p0gjo23e1kozxkcyan7blt69agpo5x2y7juhwh2x79sgbibh53gj0z07070we39t7q4k8e5gmrjtecziqnx9nxhgrlc8bwhx6yadm4dnbmykpnyk26vuraaypsz0s83wiyhm920qyf4i5g6c8e7wbetexmvffv43h2wngllpxw5c71vlhat2wxvh0tlh98za3vcldl7exhwx3ol4v0l9xv7gefneqggvr3j8gvwzpzfasd7ii2hdmd8u91dlmee8ojwet9kzfzkl9hkvzyvuqt9dcusp3w11q1i9w == \s\d\s\w\z\y\c\o\1\d\l\4\u\2\u\q\j\c\q\b\e\f\8\a\5\k\a\9\v\f\x\3\l\p\r\k\n\7\6\d\o\q\f\h\c\w\a\d\h\u\w\3\i\q\s\h\4\s\a\k\r\2\i\z\r\5\m\o\w\o\2\h\y\3\o\a\c\t\z\0\d\9\4\f\w\a\p\x\r\t\k\m\e\5\g\i\h\u\8\v\i\a\3\y\z\2\4\y\u\x\g\m\7\j\q\g\8\l\f\u\q\4\s\4\e\n\6\j\v\l\m\f\4\g\h\3\e\v\9\n\2\b\8\u\5\9\d\5\5\d\8\t\1\o\2\p\8\2\n\c\1\o\k\m\c\x\5\t\q\j\8\7\t\x\o\6\4\d\g\g\0\6\o\n\o\s\0\x\c\0\p\r\g\k\h\z\y\s\j\z\r\m\g\d\p\i\l\i\f\s\p\d\l\m\k\z\q\b\f\6\2\o\1\p\3\l\v\k\m\e\5\e\m\8\z\i\4\t\h\1\a\p\5\7\a\1\f\8\s\0\q\o\v\y\z\p\l\h\8\s\p\p\k\7\y\0\g\u\1\o\q\4\v\e\c\e\2\f\q\7\j\r\9\n\g\2\m\i\i\e\9\w\c\h\m\h\7\5\p\n\k\9\q\h\6\d\7\0\l\w\i\b\q\v\6\e\r\b\1\0\2\j\y\l\c\5\4\5\5\1\j\m\k\6\6\1\1\0\v\j\d\k\j\o\3\4\c\9\f\z\o\7\m\m\w\h\p\v\f\q\2\i\v\8\o\p\1\y\t\b\e\p\r\p\c\r\u\r\m\c\d\8\h\t\8\6\1\f\q\v\s\j\0\3\e\x\c\8\7\x\n\x\t\3\r\g\d\c\l\7\m\3\a\t\v\5\k\r\d\o\q\g\s\f\o\p\m\p\0\d\i\z\j\9\w\k\9\m\r\n\d\3\f\5\r\b\1\t\9\e\a\p\j\s\z\b\m\q\3\j\p\0\1\g\j\7\e\4\w\z\d\8\s\n\6\p\j\i\v\k\2\i\o\f\g\7\z\5\m\g\t\2\m\v\6\k\q\3\b\b\8\s\x\a\l\a\4\q\d\w\1\6\v\h\p\4\o\q\l\x\u\b\7\1\1\d\b\r\e\r\0\m\w\y\m\v\6\8\a\t\y\z\j\l\f\i\2\y\5\y\p\l\i\8\r\4\0\u\9\h\e\h\2\9\j\x\e\h\n\a\4\8\4\7\4\4\u\g\j\l\d\x\8\d\r\3\c\f\7\y\2\j\o\x\b\t\2\5\2\3\c\s\x\q\z\x\y\f\2\z\q\o\2\b\0\2\i\5\7\y\i\n\l\z\h\o\d\e\i\k\s\i\x\9\s\f\a\z\i\8\p\g\y\2\e\q\9\m\5\z\2\g\f\v\k\h\7\u\u\5\z\a\g\f\0\7\3\k\f\a\u\q\m\2\9\z\7\e\9\i\s\h\o\m\k\w\x\z\c\l\u\g\0\4\1\c\q\f\q\d\q\v\o\v\c\5\3\3\y\8\n\e\b\a\9\w\a\f\y\4\7\u\q\4\j\a\0\j\i\i\5\p\0\g\j\o\2\3\e\1\k\o\z\x\k\c\y\a\n\7\b\l\t\6\9\a\g\p\o\5\x\2\y\7\j\u\h\w\h\2\x\7\9\s\g\b\i\b\h\5\3\g\j\0\z\0\7\0\7\0\w\e\3\9\t\7\q\4\k\8\e\5\g\m\r\j\t\e\c\z\i\q\n\x\9\n\x\h\g\r\l\c\8\b\w\h\x\6\y\a\d\m\4\d\n\b\m\y\k\p\n\y\k\2\6\v\u\r\a\a\y\p\s\z\0\s\8\3\w\i\y\h\m\9\2\0\q\y\f\4\i\5\g\6\c\8\e\7\w\b\e\t\e\x\m\v\f\f\v\4\3\h\2\w\n\g\l\l\p\x\w\5\c\7\1\v\l\h\a\t\2\w\x\v\h\0\t\l\h\9\8\z\a\3\v\c\l\d\l\7\e\x\h\w\x\3\o\l\4\v\0\l\9\x\v\7\g\e\f\n\e\q\g\g\v\r\3\j\8\g\v\w\z\p\z\f\a\s\d\7\i\i\2\h\d\m\d\8\u\9\1\d\l\m\e\e\8\o\j\w\e\t\9\k\z\f\z\k\l\9\h\k\v\z\y\v\u\q\t\9\d\c\u\s\p\3\w\1\1\q\1\i\9\w ]] 00:07:37.198 00:19:52 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.767 00:19:53 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:37.767 00:19:53 -- dd/uring.sh@75 -- # gen_conf 00:07:37.767 00:19:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:37.767 00:19:53 -- common/autotest_common.sh@10 -- # set +x 
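For reference, the two long pattern matches above are the verify-magic check from dd/uring.sh; a minimal sketch of the same idea follows (magic generation and file paths are simplified assumptions, not the exact autotest code).

# Sketch: write a random 1 KiB magic at the start of the source dump, copy it
# through the bdev under test, then read the same region back from the copy and
# compare, followed by a whole-file diff of source and destination.
magic=$(tr -dc 'a-z0-9' </dev/urandom | head -c 1024)
printf '%s' "$magic" | dd of=magic.dump0 bs=1024 count=1 conv=notrunc status=none
# ... spdk_dd copies magic.dump0 -> uring0 -> magic.dump1 here ...
read -rn1024 verify_magic < magic.dump1
[[ "$verify_magic" == "$magic" ]]
diff -q magic.dump0 magic.dump1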
00:07:37.767 [2024-09-29 00:19:53.409020] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:37.767 [2024-09-29 00:19:53.409131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:07:37.767 { 00:07:37.767 "subsystems": [ 00:07:37.767 { 00:07:37.767 "subsystem": "bdev", 00:07:37.767 "config": [ 00:07:37.767 { 00:07:37.767 "params": { 00:07:37.767 "block_size": 512, 00:07:37.767 "num_blocks": 1048576, 00:07:37.767 "name": "malloc0" 00:07:37.767 }, 00:07:37.767 "method": "bdev_malloc_create" 00:07:37.767 }, 00:07:37.767 { 00:07:37.767 "params": { 00:07:37.767 "filename": "/dev/zram1", 00:07:37.767 "name": "uring0" 00:07:37.767 }, 00:07:37.767 "method": "bdev_uring_create" 00:07:37.767 }, 00:07:37.767 { 00:07:37.767 "method": "bdev_wait_for_examine" 00:07:37.767 } 00:07:37.767 ] 00:07:37.767 } 00:07:37.767 ] 00:07:37.767 } 00:07:37.767 [2024-09-29 00:19:53.540747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.767 [2024-09-29 00:19:53.595628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.274  Copying: 166/512 [MB] (166 MBps) Copying: 332/512 [MB] (165 MBps) Copying: 494/512 [MB] (162 MBps) Copying: 512/512 [MB] (average 164 MBps) 00:07:41.274 00:07:41.274 00:19:57 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:41.274 00:19:57 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:41.274 00:19:57 -- dd/uring.sh@87 -- # : 00:07:41.274 00:19:57 -- dd/uring.sh@87 -- # : 00:07:41.275 00:19:57 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:41.275 00:19:57 -- dd/uring.sh@87 -- # gen_conf 00:07:41.275 00:19:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:41.275 00:19:57 -- common/autotest_common.sh@10 -- # set +x 00:07:41.534 [2024-09-29 00:19:57.172792] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
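Every spdk_dd run in this section is driven by an inline JSON bdev config like the ones printed above. A hedged sketch of reproducing such a run by hand follows; the block size, block count, zram device and bdev names are copied from the log, while the DD_BIN variable and the temp-file handling are assumptions.

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as it appears in this log
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$DD_BIN" --ib=uring0 --ob=malloc0 --json "$conf"   # bdev-to-bdev copy, as in the run above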
00:07:41.534 [2024-09-29 00:19:57.172892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59057 ] 00:07:41.534 { 00:07:41.534 "subsystems": [ 00:07:41.534 { 00:07:41.534 "subsystem": "bdev", 00:07:41.534 "config": [ 00:07:41.534 { 00:07:41.534 "params": { 00:07:41.534 "block_size": 512, 00:07:41.534 "num_blocks": 1048576, 00:07:41.534 "name": "malloc0" 00:07:41.534 }, 00:07:41.534 "method": "bdev_malloc_create" 00:07:41.534 }, 00:07:41.534 { 00:07:41.534 "params": { 00:07:41.534 "filename": "/dev/zram1", 00:07:41.534 "name": "uring0" 00:07:41.534 }, 00:07:41.534 "method": "bdev_uring_create" 00:07:41.534 }, 00:07:41.534 { 00:07:41.534 "params": { 00:07:41.534 "name": "uring0" 00:07:41.534 }, 00:07:41.534 "method": "bdev_uring_delete" 00:07:41.534 }, 00:07:41.534 { 00:07:41.534 "method": "bdev_wait_for_examine" 00:07:41.534 } 00:07:41.534 ] 00:07:41.534 } 00:07:41.534 ] 00:07:41.534 } 00:07:41.534 [2024-09-29 00:19:57.309503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.534 [2024-09-29 00:19:57.363134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.080  Copying: 0/0 [B] (average 0 Bps) 00:07:42.080 00:07:42.080 00:19:57 -- dd/uring.sh@94 -- # : 00:07:42.080 00:19:57 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.080 00:19:57 -- dd/uring.sh@94 -- # gen_conf 00:07:42.080 00:19:57 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.080 00:19:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:42.080 00:19:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.080 00:19:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.080 00:19:57 -- common/autotest_common.sh@10 -- # set +x 00:07:42.080 00:19:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.080 00:19:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.080 00:19:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.080 00:19:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.080 00:19:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.080 00:19:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.080 00:19:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.080 00:19:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.080 [2024-09-29 00:19:57.858518] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
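The NOT / valid_exec_arg machinery above asserts that a copy from the just-deleted uring0 bdev fails. A simplified sketch of that inverted assertion, reusing DD_BIN and conf from the earlier sketch; the helper name is an assumption, not the autotest helper.

expect_failure() {
    # run the command, capture its exit status, and pass only if it was non-zero
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
# uring0 was removed by bdev_uring_delete, so this copy must fail:
expect_failure "$DD_BIN" --ib=uring0 --of=/dev/null --json "$conf"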
00:07:42.080 [2024-09-29 00:19:57.858635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ] 00:07:42.080 { 00:07:42.080 "subsystems": [ 00:07:42.080 { 00:07:42.080 "subsystem": "bdev", 00:07:42.080 "config": [ 00:07:42.080 { 00:07:42.080 "params": { 00:07:42.080 "block_size": 512, 00:07:42.080 "num_blocks": 1048576, 00:07:42.080 "name": "malloc0" 00:07:42.080 }, 00:07:42.080 "method": "bdev_malloc_create" 00:07:42.080 }, 00:07:42.080 { 00:07:42.080 "params": { 00:07:42.080 "filename": "/dev/zram1", 00:07:42.080 "name": "uring0" 00:07:42.080 }, 00:07:42.080 "method": "bdev_uring_create" 00:07:42.080 }, 00:07:42.080 { 00:07:42.080 "params": { 00:07:42.080 "name": "uring0" 00:07:42.080 }, 00:07:42.080 "method": "bdev_uring_delete" 00:07:42.080 }, 00:07:42.080 { 00:07:42.080 "method": "bdev_wait_for_examine" 00:07:42.080 } 00:07:42.080 ] 00:07:42.080 } 00:07:42.080 ] 00:07:42.080 } 00:07:42.339 [2024-09-29 00:19:57.994009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.339 [2024-09-29 00:19:58.045601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.598 [2024-09-29 00:19:58.190118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:42.598 [2024-09-29 00:19:58.190204] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:42.598 [2024-09-29 00:19:58.190232] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:07:42.598 [2024-09-29 00:19:58.190241] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.598 [2024-09-29 00:19:58.361860] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.857 00:19:58 -- common/autotest_common.sh@643 -- # es=237 00:07:42.857 00:19:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.857 00:19:58 -- common/autotest_common.sh@652 -- # es=109 00:07:42.857 00:19:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:42.857 00:19:58 -- common/autotest_common.sh@660 -- # es=1 00:07:42.857 00:19:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.857 00:19:58 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:42.857 00:19:58 -- dd/common.sh@172 -- # local id=1 00:07:42.857 00:19:58 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:42.857 00:19:58 -- dd/common.sh@176 -- # echo 1 00:07:42.857 00:19:58 -- dd/common.sh@177 -- # echo 1 00:07:42.857 00:19:58 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:42.857 00:07:42.857 real 0m13.935s 00:07:42.857 user 0m7.999s 00:07:42.857 sys 0m5.371s 00:07:42.857 00:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.857 00:19:58 -- common/autotest_common.sh@10 -- # set +x 00:07:42.857 ************************************ 00:07:42.857 END TEST dd_uring_copy 00:07:42.857 ************************************ 00:07:43.116 00:07:43.116 real 0m14.068s 00:07:43.116 user 0m8.051s 00:07:43.116 sys 0m5.452s 00:07:43.116 00:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.116 00:19:58 -- common/autotest_common.sh@10 -- # set +x 00:07:43.116 ************************************ 00:07:43.117 END TEST spdk_dd_uring 00:07:43.117 ************************************ 00:07:43.117 00:19:58 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.117 00:19:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.117 00:19:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.117 00:19:58 -- common/autotest_common.sh@10 -- # set +x 00:07:43.117 ************************************ 00:07:43.117 START TEST spdk_dd_sparse 00:07:43.117 ************************************ 00:07:43.117 00:19:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.117 * Looking for test storage... 00:07:43.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.117 00:19:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.117 00:19:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.117 00:19:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.117 00:19:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.117 00:19:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.117 00:19:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.117 00:19:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.117 00:19:58 -- paths/export.sh@5 -- # export PATH 00:07:43.117 00:19:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.117 00:19:58 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:43.117 00:19:58 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:43.117 00:19:58 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:07:43.117 00:19:58 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:43.117 00:19:58 -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:43.117 00:19:58 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:43.117 00:19:58 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:43.117 00:19:58 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:43.117 00:19:58 -- dd/sparse.sh@118 -- # prepare 00:07:43.117 00:19:58 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:43.117 00:19:58 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:43.117 1+0 records in 00:07:43.117 1+0 records out 00:07:43.117 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00573473 s, 731 MB/s 00:07:43.117 00:19:58 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:43.117 1+0 records in 00:07:43.117 1+0 records out 00:07:43.117 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00607964 s, 690 MB/s 00:07:43.117 00:19:58 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:43.117 1+0 records in 00:07:43.117 1+0 records out 00:07:43.117 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00637393 s, 658 MB/s 00:07:43.117 00:19:58 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:43.117 00:19:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.117 00:19:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.117 00:19:58 -- common/autotest_common.sh@10 -- # set +x 00:07:43.117 ************************************ 00:07:43.117 START TEST dd_sparse_file_to_file 00:07:43.117 ************************************ 00:07:43.117 00:19:58 -- common/autotest_common.sh@1104 -- # file_to_file 00:07:43.117 00:19:58 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:43.117 00:19:58 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:43.117 00:19:58 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.117 00:19:58 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:43.117 00:19:58 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:43.117 00:19:58 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:43.117 00:19:58 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:43.117 00:19:58 -- dd/sparse.sh@41 -- # gen_conf 00:07:43.117 00:19:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.117 00:19:58 -- common/autotest_common.sh@10 -- # set +x 00:07:43.117 { 00:07:43.117 "subsystems": [ 00:07:43.117 { 00:07:43.117 "subsystem": "bdev", 00:07:43.117 "config": [ 00:07:43.117 { 00:07:43.117 "params": { 00:07:43.117 "block_size": 4096, 00:07:43.117 "filename": "dd_sparse_aio_disk", 00:07:43.117 "name": "dd_aio" 00:07:43.117 }, 00:07:43.117 "method": "bdev_aio_create" 00:07:43.117 }, 00:07:43.117 { 00:07:43.117 "params": { 00:07:43.117 "lvs_name": "dd_lvstore", 00:07:43.117 "bdev_name": "dd_aio" 00:07:43.117 }, 00:07:43.117 "method": "bdev_lvol_create_lvstore" 00:07:43.117 }, 00:07:43.117 { 00:07:43.117 "method": "bdev_wait_for_examine" 00:07:43.117 } 00:07:43.117 ] 00:07:43.117 } 00:07:43.117 ] 00:07:43.117 } 00:07:43.375 [2024-09-29 00:19:58.967452] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:43.375 [2024-09-29 00:19:58.967575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:07:43.375 [2024-09-29 00:19:59.113841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.375 [2024-09-29 00:19:59.168416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.892  Copying: 12/36 [MB] (average 1714 MBps) 00:07:43.892 00:07:43.892 00:19:59 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:43.892 00:19:59 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:43.892 00:19:59 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:43.892 00:19:59 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:43.892 00:19:59 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:43.892 00:19:59 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:43.892 00:19:59 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:43.892 00:19:59 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:43.892 00:19:59 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:43.892 00:19:59 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:43.892 00:07:43.892 real 0m0.627s 00:07:43.892 user 0m0.378s 00:07:43.892 sys 0m0.145s 00:07:43.892 00:19:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.892 ************************************ 00:07:43.892 END TEST dd_sparse_file_to_file 00:07:43.892 ************************************ 00:07:43.892 00:19:59 -- common/autotest_common.sh@10 -- # set +x 00:07:43.892 00:19:59 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:43.892 00:19:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.892 00:19:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.892 00:19:59 -- common/autotest_common.sh@10 -- # set +x 00:07:43.892 ************************************ 00:07:43.892 START TEST dd_sparse_file_to_bdev 00:07:43.892 ************************************ 00:07:43.892 00:19:59 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:07:43.892 00:19:59 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.892 00:19:59 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:43.892 00:19:59 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:43.892 00:19:59 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:43.892 00:19:59 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:43.892 00:19:59 -- dd/sparse.sh@73 -- # gen_conf 00:07:43.892 00:19:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:43.892 00:19:59 -- common/autotest_common.sh@10 -- # set +x 00:07:43.892 [2024-09-29 00:19:59.624509] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
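The stat checks above are the pass criterion for the sparse file-to-file copy; the same check, spelled out:

# Apparent size (%s) and allocated 512-byte blocks (%b) must match between source and copy:
# 37748736 bytes apparent (36 MiB) but only 24576 blocks allocated (12 MiB of real data),
# which shows the holes survived the --sparse copy.
[[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]
[[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]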
00:07:43.892 [2024-09-29 00:19:59.624618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59211 ] 00:07:43.892 { 00:07:43.892 "subsystems": [ 00:07:43.892 { 00:07:43.892 "subsystem": "bdev", 00:07:43.892 "config": [ 00:07:43.892 { 00:07:43.892 "params": { 00:07:43.892 "block_size": 4096, 00:07:43.892 "filename": "dd_sparse_aio_disk", 00:07:43.892 "name": "dd_aio" 00:07:43.892 }, 00:07:43.892 "method": "bdev_aio_create" 00:07:43.892 }, 00:07:43.892 { 00:07:43.892 "params": { 00:07:43.892 "lvs_name": "dd_lvstore", 00:07:43.892 "lvol_name": "dd_lvol", 00:07:43.892 "size": 37748736, 00:07:43.892 "thin_provision": true 00:07:43.892 }, 00:07:43.892 "method": "bdev_lvol_create" 00:07:43.892 }, 00:07:43.892 { 00:07:43.892 "method": "bdev_wait_for_examine" 00:07:43.892 } 00:07:43.892 ] 00:07:43.892 } 00:07:43.892 ] 00:07:43.892 } 00:07:44.152 [2024-09-29 00:19:59.761670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.152 [2024-09-29 00:19:59.816722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.152 [2024-09-29 00:19:59.872989] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:44.152  Copying: 12/36 [MB] (average 521 MBps)[2024-09-29 00:19:59.911122] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:44.410 00:07:44.410 00:07:44.410 00:07:44.410 real 0m0.551s 00:07:44.410 user 0m0.367s 00:07:44.410 sys 0m0.106s 00:07:44.410 00:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.410 ************************************ 00:07:44.410 END TEST dd_sparse_file_to_bdev 00:07:44.410 ************************************ 00:07:44.410 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.410 00:20:00 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:44.410 00:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.410 00:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.410 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.410 ************************************ 00:07:44.410 START TEST dd_sparse_bdev_to_file 00:07:44.410 ************************************ 00:07:44.410 00:20:00 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:07:44.410 00:20:00 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:44.410 00:20:00 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:44.410 00:20:00 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:44.410 00:20:00 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:44.410 00:20:00 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:44.410 00:20:00 -- dd/sparse.sh@91 -- # gen_conf 00:07:44.410 00:20:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.410 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.410 [2024-09-29 00:20:00.222026] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
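Taken together, the three sparse tests form a round trip. A hedged outline follows, assuming DD_BIN from the earlier sketch and that aio_lvs_conf, aio_lvol_conf and aio_conf are shell variables holding the per-test JSON configs printed in the log.

# 1. plain file -> plain file, preserving holes
"$DD_BIN" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$aio_lvs_conf"
# 2. plain file -> thin-provisioned logical volume dd_lvstore/dd_lvol on the AIO bdev
"$DD_BIN" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$aio_lvol_conf"
# 3. logical volume back out to a plain file; file_zero3 must match file_zero2 in size and blocks
"$DD_BIN" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json "$aio_conf"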
00:07:44.410 [2024-09-29 00:20:00.222108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59242 ] 00:07:44.410 { 00:07:44.410 "subsystems": [ 00:07:44.410 { 00:07:44.410 "subsystem": "bdev", 00:07:44.410 "config": [ 00:07:44.410 { 00:07:44.410 "params": { 00:07:44.410 "block_size": 4096, 00:07:44.410 "filename": "dd_sparse_aio_disk", 00:07:44.410 "name": "dd_aio" 00:07:44.410 }, 00:07:44.410 "method": "bdev_aio_create" 00:07:44.410 }, 00:07:44.410 { 00:07:44.410 "method": "bdev_wait_for_examine" 00:07:44.410 } 00:07:44.410 ] 00:07:44.410 } 00:07:44.410 ] 00:07:44.410 } 00:07:44.669 [2024-09-29 00:20:00.357593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.669 [2024-09-29 00:20:00.411160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.928  Copying: 12/36 [MB] (average 1333 MBps) 00:07:44.928 00:07:44.928 00:20:00 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:44.928 00:20:00 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:44.928 00:20:00 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:44.928 00:20:00 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:44.928 00:20:00 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:44.928 00:20:00 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:44.928 00:20:00 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:44.928 00:20:00 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:44.928 00:20:00 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:44.928 00:20:00 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:44.928 00:07:44.928 real 0m0.546s 00:07:44.928 user 0m0.340s 00:07:44.928 sys 0m0.123s 00:07:44.928 00:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.928 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:44.928 ************************************ 00:07:44.928 END TEST dd_sparse_bdev_to_file 00:07:44.928 ************************************ 00:07:44.928 00:20:00 -- dd/sparse.sh@1 -- # cleanup 00:07:44.928 00:20:00 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:44.928 00:20:00 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:44.928 00:20:00 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:44.928 00:20:00 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:45.187 00:07:45.187 real 0m2.008s 00:07:45.187 user 0m1.173s 00:07:45.187 sys 0m0.566s 00:07:45.187 00:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.187 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:45.187 ************************************ 00:07:45.187 END TEST spdk_dd_sparse 00:07:45.187 ************************************ 00:07:45.187 00:20:00 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:45.187 00:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.187 00:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.187 00:20:00 -- common/autotest_common.sh@10 -- # set +x 00:07:45.187 ************************************ 00:07:45.187 START TEST spdk_dd_negative 00:07:45.187 ************************************ 00:07:45.187 00:20:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:45.187 * Looking for test storage... 
00:07:45.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.187 00:20:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.187 00:20:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.187 00:20:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.187 00:20:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.187 00:20:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.187 00:20:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.187 00:20:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.187 00:20:00 -- paths/export.sh@5 -- # export PATH 00:07:45.187 00:20:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.187 00:20:00 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.187 00:20:00 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.187 00:20:00 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.187 00:20:00 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.187 00:20:00 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:45.187 00:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.187 00:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.187 00:20:00 -- common/autotest_common.sh@10 -- # set +x 
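The negative suite that starts below feeds spdk_dd deliberately broken argument sets and passes only when each run exits non-zero. The first case, an unknown option, sketched with DD_BIN from the earlier sketch:

# --ii= is not a recognized option; spdk_dd prints its usage text (seen in the log
# that follows) and exits with an error, which is exactly what the test expects.
if "$DD_BIN" --ii= --ob=; then
    echo "spdk_dd unexpectedly accepted an invalid option" >&2
    exit 1
fi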
00:07:45.187 ************************************ 00:07:45.187 START TEST dd_invalid_arguments 00:07:45.187 ************************************ 00:07:45.187 00:20:00 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:07:45.187 00:20:00 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.187 00:20:00 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.187 00:20:00 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.187 00:20:00 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.187 00:20:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.187 00:20:00 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.187 00:20:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.187 00:20:00 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.187 00:20:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.187 00:20:00 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.187 00:20:00 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.187 00:20:00 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:45.187 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:45.187 options: 00:07:45.187 -c, --config JSON config file (default none) 00:07:45.187 --json JSON config file (default none) 00:07:45.187 --json-ignore-init-errors 00:07:45.187 don't exit on invalid config entry 00:07:45.187 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:45.187 -g, --single-file-segments 00:07:45.187 force creating just one hugetlbfs file 00:07:45.187 -h, --help show this usage 00:07:45.187 -i, --shm-id shared memory ID (optional) 00:07:45.187 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:45.187 --lcores lcore to CPU mapping list. The list is in the format: 00:07:45.187 [<,lcores[@CPUs]>...] 00:07:45.187 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:45.187 Within the group, '-' is used for range separator, 00:07:45.188 ',' is used for single number separator. 00:07:45.188 '( )' can be omitted for single element group, 00:07:45.188 '@' can be omitted if cpus and lcores have the same value 00:07:45.188 -n, --mem-channels channel number of memory channels used for DPDK 00:07:45.188 -p, --main-core main (primary) core for DPDK 00:07:45.188 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:45.188 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:45.188 --disable-cpumask-locks Disable CPU core lock files. 
00:07:45.188 --silence-noticelog disable notice level logging to stderr 00:07:45.188 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:45.188 -u, --no-pci disable PCI access 00:07:45.188 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:45.188 --max-delay maximum reactor delay (in microseconds) 00:07:45.188 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:45.188 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:45.188 -R, --huge-unlink unlink huge files after initialization 00:07:45.188 -v, --version print SPDK version 00:07:45.188 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:45.188 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:45.188 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:45.188 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:45.188 Tracepoints vary in size and can use more than one trace entry. 00:07:45.188 --rpcs-allowed comma-separated list of permitted RPCS 00:07:45.188 --env-context Opaque context for use of the env implementation 00:07:45.188 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:45.188 --no-huge run without using hugepages 00:07:45.188 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:45.188 -e, --tpoint-group [:] 00:07:45.188 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:45.188 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:45.188 [2024-09-29 00:20:00.984002] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:45.188 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:45.188 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:45.188 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:45.188 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:45.188 [--------- DD Options ---------] 00:07:45.188 --if Input file. Must specify either --if or --ib. 00:07:45.188 --ib Input bdev. Must specifier either --if or --ib 00:07:45.188 --of Output file. Must specify either --of or --ob. 00:07:45.188 --ob Output bdev. Must specify either --of or --ob. 00:07:45.188 --iflag Input file flags. 00:07:45.188 --oflag Output file flags. 00:07:45.188 --bs I/O unit size (default: 4096) 00:07:45.188 --qd Queue depth (default: 2) 00:07:45.188 --count I/O unit count. 
The number of I/O units to copy. (default: all) 00:07:45.188 --skip Skip this many I/O units at start of input. (default: 0) 00:07:45.188 --seek Skip this many I/O units at start of output. (default: 0) 00:07:45.188 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:45.188 --sparse Enable hole skipping in input target 00:07:45.188 Available iflag and oflag values: 00:07:45.188 append - append mode 00:07:45.188 direct - use direct I/O for data 00:07:45.188 directory - fail unless a directory 00:07:45.188 dsync - use synchronized I/O for data 00:07:45.188 noatime - do not update access time 00:07:45.188 noctty - do not assign controlling terminal from file 00:07:45.188 nofollow - do not follow symlinks 00:07:45.188 nonblock - use non-blocking I/O 00:07:45.188 sync - use synchronized I/O for data and metadata 00:07:45.188 00:20:01 -- common/autotest_common.sh@643 -- # es=2 00:07:45.188 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.188 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.188 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.188 00:07:45.188 real 0m0.075s 00:07:45.188 user 0m0.043s 00:07:45.188 sys 0m0.031s 00:07:45.188 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.188 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.188 ************************************ 00:07:45.188 END TEST dd_invalid_arguments 00:07:45.188 ************************************ 00:07:45.447 00:20:01 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:45.447 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.447 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.447 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.447 ************************************ 00:07:45.447 START TEST dd_double_input 00:07:45.447 ************************************ 00:07:45.447 00:20:01 -- common/autotest_common.sh@1104 -- # double_input 00:07:45.447 00:20:01 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:45.447 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.447 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:45.447 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.447 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:45.447 [2024-09-29 00:20:01.106686] spdk_dd.c:1467:main: *ERROR*: You may specify either 
--if or --ib, but not both. 00:07:45.447 00:20:01 -- common/autotest_common.sh@643 -- # es=22 00:07:45.447 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.447 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.447 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.447 00:07:45.447 real 0m0.070s 00:07:45.447 user 0m0.046s 00:07:45.447 sys 0m0.022s 00:07:45.447 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.447 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.447 ************************************ 00:07:45.447 END TEST dd_double_input 00:07:45.447 ************************************ 00:07:45.447 00:20:01 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:45.447 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.447 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.447 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.447 ************************************ 00:07:45.447 START TEST dd_double_output 00:07:45.447 ************************************ 00:07:45.447 00:20:01 -- common/autotest_common.sh@1104 -- # double_output 00:07:45.447 00:20:01 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:45.447 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.447 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:45.447 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.447 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.447 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:45.447 [2024-09-29 00:20:01.232374] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
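The two failures above exercise spdk_dd's mutually exclusive flags; sketched below with the same dump files, using plain ! negation in place of the test's NOT helper.

! "$DD_BIN" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=    # --if and --ib together
! "$DD_BIN" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
            --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=          # --of and --ob together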
00:07:45.447 00:20:01 -- common/autotest_common.sh@643 -- # es=22 00:07:45.447 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.447 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.447 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.447 00:07:45.447 real 0m0.070s 00:07:45.447 user 0m0.042s 00:07:45.447 sys 0m0.027s 00:07:45.447 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.447 ************************************ 00:07:45.447 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.447 END TEST dd_double_output 00:07:45.447 ************************************ 00:07:45.447 00:20:01 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:45.447 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.447 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.706 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.706 ************************************ 00:07:45.706 START TEST dd_no_input 00:07:45.706 ************************************ 00:07:45.706 00:20:01 -- common/autotest_common.sh@1104 -- # no_input 00:07:45.706 00:20:01 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:45.706 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.706 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:45.706 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.706 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:45.706 [2024-09-29 00:20:01.363013] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:45.706 00:20:01 -- common/autotest_common.sh@643 -- # es=22 00:07:45.706 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.706 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.706 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.706 00:07:45.706 real 0m0.078s 00:07:45.706 user 0m0.049s 00:07:45.706 sys 0m0.028s 00:07:45.706 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.706 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.706 ************************************ 00:07:45.706 END TEST dd_no_input 00:07:45.706 ************************************ 00:07:45.706 00:20:01 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:45.706 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.706 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.706 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.706 ************************************ 
00:07:45.706 START TEST dd_no_output 00:07:45.706 ************************************ 00:07:45.706 00:20:01 -- common/autotest_common.sh@1104 -- # no_output 00:07:45.706 00:20:01 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.706 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.706 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.706 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.706 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.706 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.706 [2024-09-29 00:20:01.485045] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:45.706 00:20:01 -- common/autotest_common.sh@643 -- # es=22 00:07:45.706 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.706 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.706 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.706 00:07:45.706 real 0m0.072s 00:07:45.706 user 0m0.040s 00:07:45.706 sys 0m0.031s 00:07:45.706 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.706 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.706 ************************************ 00:07:45.706 END TEST dd_no_output 00:07:45.706 ************************************ 00:07:45.706 00:20:01 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:45.706 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.706 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.706 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.965 ************************************ 00:07:45.965 START TEST dd_wrong_blocksize 00:07:45.965 ************************************ 00:07:45.965 00:20:01 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:07:45.965 00:20:01 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.965 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.965 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.965 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.965 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:45.965 [2024-09-29 00:20:01.610268] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:45.965 00:20:01 -- common/autotest_common.sh@643 -- # es=22 00:07:45.965 00:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:45.965 00:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:45.965 00:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:45.965 00:07:45.965 real 0m0.074s 00:07:45.965 user 0m0.048s 00:07:45.965 sys 0m0.024s 00:07:45.965 00:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.965 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.965 ************************************ 00:07:45.965 END TEST dd_wrong_blocksize 00:07:45.965 ************************************ 00:07:45.965 00:20:01 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:45.965 00:20:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:45.965 00:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.965 00:20:01 -- common/autotest_common.sh@10 -- # set +x 00:07:45.965 ************************************ 00:07:45.965 START TEST dd_smaller_blocksize 00:07:45.965 ************************************ 00:07:45.965 00:20:01 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:07:45.965 00:20:01 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.965 00:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:07:45.965 00:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.965 00:20:01 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.965 00:20:01 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:45.965 00:20:01 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:45.965 [2024-09-29 00:20:01.737583] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:45.965 [2024-09-29 00:20:01.737680] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:07:46.224 [2024-09-29 00:20:01.872276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.224 [2024-09-29 00:20:01.941560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.483 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:46.483 [2024-09-29 00:20:02.239543] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:46.483 [2024-09-29 00:20:02.239598] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.483 [2024-09-29 00:20:02.302444] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:46.740 00:20:02 -- common/autotest_common.sh@643 -- # es=244 00:07:46.740 00:20:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.740 00:20:02 -- common/autotest_common.sh@652 -- # es=116 00:07:46.740 00:20:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:46.740 00:20:02 -- common/autotest_common.sh@660 -- # es=1 00:07:46.740 00:20:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.740 00:07:46.740 real 0m0.722s 00:07:46.740 user 0m0.329s 00:07:46.740 sys 0m0.288s 00:07:46.740 00:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.740 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.740 ************************************ 00:07:46.740 END TEST dd_smaller_blocksize 00:07:46.740 ************************************ 00:07:46.740 00:20:02 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:46.740 00:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.740 00:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.740 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.740 ************************************ 00:07:46.740 START TEST dd_invalid_count 00:07:46.740 ************************************ 00:07:46.740 00:20:02 -- common/autotest_common.sh@1104 -- # invalid_count 00:07:46.740 00:20:02 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:46.740 00:20:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.741 00:20:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:46.741 00:20:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.741 00:20:02 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.741 00:20:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.741 00:20:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:46.741 [2024-09-29 00:20:02.511838] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:46.741 00:20:02 -- common/autotest_common.sh@643 -- # es=22 00:07:46.741 00:20:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.741 00:20:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.741 00:20:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.741 00:07:46.741 real 0m0.073s 00:07:46.741 user 0m0.037s 00:07:46.741 sys 0m0.035s 00:07:46.741 00:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.741 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.741 ************************************ 00:07:46.741 END TEST dd_invalid_count 00:07:46.741 ************************************ 00:07:46.741 00:20:02 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:46.741 00:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.741 00:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.741 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.741 ************************************ 00:07:46.741 START TEST dd_invalid_oflag 00:07:46.741 ************************************ 00:07:46.741 00:20:02 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:07:46.741 00:20:02 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.741 00:20:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.741 00:20:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.741 00:20:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.741 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.741 00:20:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.998 00:20:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.998 00:20:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:46.998 [2024-09-29 00:20:02.637617] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:46.998 00:20:02 -- common/autotest_common.sh@643 -- # es=22 00:07:46.998 00:20:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.998 00:20:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.998 
00:20:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.998 00:07:46.998 real 0m0.071s 00:07:46.998 user 0m0.043s 00:07:46.998 sys 0m0.027s 00:07:46.998 00:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.998 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 ************************************ 00:07:46.998 END TEST dd_invalid_oflag 00:07:46.998 ************************************ 00:07:46.998 00:20:02 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:46.998 00:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.998 00:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.998 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 ************************************ 00:07:46.998 START TEST dd_invalid_iflag 00:07:46.998 ************************************ 00:07:46.998 00:20:02 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:07:46.998 00:20:02 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.998 00:20:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:46.998 00:20:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.998 00:20:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.998 00:20:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.998 00:20:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:46.998 00:20:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.998 00:20:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.998 00:20:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:46.999 [2024-09-29 00:20:02.781779] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:46.999 00:20:02 -- common/autotest_common.sh@643 -- # es=22 00:07:46.999 00:20:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:46.999 00:20:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:46.999 00:20:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:46.999 00:07:46.999 real 0m0.092s 00:07:46.999 user 0m0.063s 00:07:46.999 sys 0m0.027s 00:07:46.999 00:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.999 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 ************************************ 00:07:46.999 END TEST dd_invalid_iflag 00:07:46.999 ************************************ 00:07:46.999 00:20:02 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:46.999 00:20:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.999 00:20:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.999 00:20:02 -- common/autotest_common.sh@10 -- # set +x 00:07:47.257 ************************************ 00:07:47.257 START TEST dd_unknown_flag 00:07:47.257 ************************************ 00:07:47.257 00:20:02 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:07:47.257 00:20:02 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.257 00:20:02 -- common/autotest_common.sh@640 -- # local es=0 00:07:47.257 00:20:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.257 00:20:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.257 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.257 00:20:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.257 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.257 00:20:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.257 00:20:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.257 00:20:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.257 00:20:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.258 00:20:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.258 [2024-09-29 00:20:02.913161] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:47.258 [2024-09-29 00:20:02.913265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59550 ] 00:07:47.258 [2024-09-29 00:20:03.047297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.258 [2024-09-29 00:20:03.094839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.516 [2024-09-29 00:20:03.139029] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:47.516 [2024-09-29 00:20:03.139115] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:47.516 [2024-09-29 00:20:03.139125] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:47.516 [2024-09-29 00:20:03.139135] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.516 [2024-09-29 00:20:03.200360] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:47.516 00:20:03 -- common/autotest_common.sh@643 -- # es=236 00:07:47.516 00:20:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:47.516 00:20:03 -- common/autotest_common.sh@652 -- # es=108 00:07:47.516 00:20:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:47.516 00:20:03 -- common/autotest_common.sh@660 -- # es=1 00:07:47.516 00:20:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:47.516 00:07:47.516 real 0m0.449s 00:07:47.516 user 0m0.255s 00:07:47.516 sys 0m0.089s 00:07:47.516 00:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.517 ************************************ 00:07:47.517 END TEST dd_unknown_flag 00:07:47.517 
************************************ 00:07:47.517 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:47.517 00:20:03 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:47.517 00:20:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:47.517 00:20:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.517 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:47.517 ************************************ 00:07:47.517 START TEST dd_invalid_json 00:07:47.517 ************************************ 00:07:47.517 00:20:03 -- common/autotest_common.sh@1104 -- # invalid_json 00:07:47.517 00:20:03 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:47.517 00:20:03 -- common/autotest_common.sh@640 -- # local es=0 00:07:47.517 00:20:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:47.517 00:20:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.517 00:20:03 -- dd/negative_dd.sh@95 -- # : 00:07:47.517 00:20:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.517 00:20:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.517 00:20:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.517 00:20:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.517 00:20:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:47.517 00:20:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.517 00:20:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.517 00:20:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:47.776 [2024-09-29 00:20:03.407446] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:47.776 [2024-09-29 00:20:03.407569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:07:47.776 [2024-09-29 00:20:03.540854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.776 [2024-09-29 00:20:03.589316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.776 [2024-09-29 00:20:03.589474] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:47.776 [2024-09-29 00:20:03.589493] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.776 [2024-09-29 00:20:03.589530] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:48.034 ************************************ 00:07:48.035 END TEST dd_invalid_json 00:07:48.035 ************************************ 00:07:48.035 00:20:03 -- common/autotest_common.sh@643 -- # es=234 00:07:48.035 00:20:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:48.035 00:20:03 -- common/autotest_common.sh@652 -- # es=106 00:07:48.035 00:20:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:48.035 00:20:03 -- common/autotest_common.sh@660 -- # es=1 00:07:48.035 00:20:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:48.035 00:07:48.035 real 0m0.327s 00:07:48.035 user 0m0.172s 00:07:48.035 sys 0m0.053s 00:07:48.035 00:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.035 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.035 00:07:48.035 real 0m2.896s 00:07:48.035 user 0m1.387s 00:07:48.035 sys 0m1.129s 00:07:48.035 00:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.035 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.035 ************************************ 00:07:48.035 END TEST spdk_dd_negative 00:07:48.035 ************************************ 00:07:48.035 ************************************ 00:07:48.035 END TEST spdk_dd 00:07:48.035 ************************************ 00:07:48.035 00:07:48.035 real 1m6.737s 00:07:48.035 user 0m41.957s 00:07:48.035 sys 0m15.701s 00:07:48.035 00:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.035 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.035 00:20:03 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:48.035 00:20:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:48.035 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.035 00:20:03 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:48.035 00:20:03 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:48.035 00:20:03 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:48.035 00:20:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:48.035 00:20:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.035 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.035 ************************************ 00:07:48.035 START TEST 
nvmf_tcp 00:07:48.035 ************************************ 00:07:48.035 00:20:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:48.294 * Looking for test storage... 00:07:48.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.294 00:20:03 -- nvmf/common.sh@7 -- # uname -s 00:07:48.294 00:20:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.294 00:20:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.294 00:20:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.294 00:20:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.294 00:20:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.294 00:20:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.294 00:20:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.294 00:20:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.294 00:20:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.294 00:20:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.294 00:20:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:48.294 00:20:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:48.294 00:20:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.294 00:20:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.294 00:20:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.294 00:20:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.294 00:20:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.294 00:20:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.294 00:20:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.294 00:20:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.294 00:20:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.294 00:20:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.294 00:20:03 -- paths/export.sh@5 -- # export PATH 00:07:48.294 00:20:03 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.294 00:20:03 -- nvmf/common.sh@46 -- # : 0 00:07:48.294 00:20:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.294 00:20:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.294 00:20:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.294 00:20:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.294 00:20:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.294 00:20:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.294 00:20:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.294 00:20:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:48.294 00:20:03 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:48.295 00:20:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.295 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.295 00:20:03 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:48.295 00:20:03 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.295 00:20:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:48.295 00:20:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.295 00:20:03 -- common/autotest_common.sh@10 -- # set +x 00:07:48.295 ************************************ 00:07:48.295 START TEST nvmf_host_management 00:07:48.295 ************************************ 00:07:48.295 00:20:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.295 * Looking for test storage... 
00:07:48.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.295 00:20:04 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:48.295 00:20:04 -- nvmf/common.sh@7 -- # uname -s 00:07:48.295 00:20:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.295 00:20:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.295 00:20:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.295 00:20:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.295 00:20:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.295 00:20:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.295 00:20:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.295 00:20:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.295 00:20:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.295 00:20:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:48.295 00:20:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:48.295 00:20:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.295 00:20:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.295 00:20:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:48.295 00:20:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:48.295 00:20:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.295 00:20:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.295 00:20:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.295 00:20:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.295 00:20:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.295 00:20:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.295 00:20:04 -- 
paths/export.sh@5 -- # export PATH 00:07:48.295 00:20:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.295 00:20:04 -- nvmf/common.sh@46 -- # : 0 00:07:48.295 00:20:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.295 00:20:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.295 00:20:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.295 00:20:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.295 00:20:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.295 00:20:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.295 00:20:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.295 00:20:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.295 00:20:04 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.295 00:20:04 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.295 00:20:04 -- target/host_management.sh@104 -- # nvmftestinit 00:07:48.295 00:20:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.295 00:20:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.295 00:20:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.295 00:20:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.295 00:20:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.295 00:20:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.295 00:20:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.295 00:20:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.295 00:20:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:48.295 00:20:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:48.295 00:20:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.295 00:20:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.295 00:20:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:48.295 00:20:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:48.295 00:20:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:48.295 00:20:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:48.295 00:20:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:48.295 00:20:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.295 00:20:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:48.295 00:20:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:48.295 00:20:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:48.295 00:20:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:48.295 00:20:04 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:07:48.295 Cannot find device "nvmf_init_br" 00:07:48.295 00:20:04 -- nvmf/common.sh@153 -- # true 00:07:48.295 00:20:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:48.295 Cannot find device "nvmf_tgt_br" 00:07:48.295 00:20:04 -- nvmf/common.sh@154 -- # true 00:07:48.295 00:20:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.295 Cannot find device "nvmf_tgt_br2" 00:07:48.295 00:20:04 -- nvmf/common.sh@155 -- # true 00:07:48.295 00:20:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:48.295 Cannot find device "nvmf_init_br" 00:07:48.295 00:20:04 -- nvmf/common.sh@156 -- # true 00:07:48.295 00:20:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:48.554 Cannot find device "nvmf_tgt_br" 00:07:48.554 00:20:04 -- nvmf/common.sh@157 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:48.554 Cannot find device "nvmf_tgt_br2" 00:07:48.554 00:20:04 -- nvmf/common.sh@158 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:48.554 Cannot find device "nvmf_br" 00:07:48.554 00:20:04 -- nvmf/common.sh@159 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:48.554 Cannot find device "nvmf_init_if" 00:07:48.554 00:20:04 -- nvmf/common.sh@160 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.554 00:20:04 -- nvmf/common.sh@161 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:48.554 00:20:04 -- nvmf/common.sh@162 -- # true 00:07:48.554 00:20:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:48.554 00:20:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:48.554 00:20:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:48.554 00:20:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:48.554 00:20:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:48.554 00:20:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:48.554 00:20:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:48.554 00:20:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:48.554 00:20:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:48.554 00:20:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:48.554 00:20:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:48.554 00:20:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:48.554 00:20:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:48.554 00:20:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:48.554 00:20:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:48.554 00:20:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:48.554 00:20:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:48.813 00:20:04 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:07:48.813 00:20:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:48.813 00:20:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:48.813 00:20:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:48.813 00:20:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:48.813 00:20:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:48.813 00:20:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:48.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:07:48.813 00:07:48.813 --- 10.0.0.2 ping statistics --- 00:07:48.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.813 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:07:48.813 00:20:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:48.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:48.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:48.813 00:07:48.813 --- 10.0.0.3 ping statistics --- 00:07:48.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.813 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:48.813 00:20:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:48.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:48.813 00:07:48.813 --- 10.0.0.1 ping statistics --- 00:07:48.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.813 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:48.813 00:20:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.813 00:20:04 -- nvmf/common.sh@421 -- # return 0 00:07:48.813 00:20:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.813 00:20:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.813 00:20:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.813 00:20:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.813 00:20:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.813 00:20:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.813 00:20:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.813 00:20:04 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:48.813 00:20:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:48.813 00:20:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.813 00:20:04 -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 ************************************ 00:07:48.813 START TEST nvmf_host_management 00:07:48.813 ************************************ 00:07:48.813 00:20:04 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:07:48.813 00:20:04 -- target/host_management.sh@69 -- # starttarget 00:07:48.813 00:20:04 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:48.813 00:20:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:48.813 00:20:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.813 00:20:04 -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 00:20:04 -- nvmf/common.sh@469 -- # nvmfpid=59835 00:07:48.813 00:20:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:48.813 00:20:04 -- 
nvmf/common.sh@470 -- # waitforlisten 59835 00:07:48.813 00:20:04 -- common/autotest_common.sh@819 -- # '[' -z 59835 ']' 00:07:48.813 00:20:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.813 00:20:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.813 00:20:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.813 00:20:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.813 00:20:04 -- common/autotest_common.sh@10 -- # set +x 00:07:48.813 [2024-09-29 00:20:04.585923] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:48.813 [2024-09-29 00:20:04.586011] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.072 [2024-09-29 00:20:04.726854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.072 [2024-09-29 00:20:04.799232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.072 [2024-09-29 00:20:04.799648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.072 [2024-09-29 00:20:04.799797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.072 [2024-09-29 00:20:04.800042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.072 [2024-09-29 00:20:04.800455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.072 [2024-09-29 00:20:04.800531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.072 [2024-09-29 00:20:04.800620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:49.072 [2024-09-29 00:20:04.800622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.007 00:20:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.007 00:20:05 -- common/autotest_common.sh@852 -- # return 0 00:07:50.007 00:20:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:50.007 00:20:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:50.007 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.007 00:20:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.007 00:20:05 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.007 00:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.007 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.007 [2024-09-29 00:20:05.640030] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.007 00:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.007 00:20:05 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.007 00:20:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:50.007 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 00:20:05 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:50.008 00:20:05 -- target/host_management.sh@23 -- # cat 00:07:50.008 00:20:05 -- target/host_management.sh@30 -- # 
rpc_cmd 00:07:50.008 00:20:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.008 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 Malloc0 00:07:50.008 [2024-09-29 00:20:05.722116] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.008 00:20:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.008 00:20:05 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.008 00:20:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:50.008 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.008 00:20:05 -- target/host_management.sh@73 -- # perfpid=59889 00:07:50.008 00:20:05 -- target/host_management.sh@74 -- # waitforlisten 59889 /var/tmp/bdevperf.sock 00:07:50.008 00:20:05 -- common/autotest_common.sh@819 -- # '[' -z 59889 ']' 00:07:50.008 00:20:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.008 00:20:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.008 00:20:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.008 00:20:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.008 00:20:05 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 00:20:05 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.008 00:20:05 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.008 00:20:05 -- nvmf/common.sh@520 -- # config=() 00:07:50.008 00:20:05 -- nvmf/common.sh@520 -- # local subsystem config 00:07:50.008 00:20:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:50.008 00:20:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:50.008 { 00:07:50.008 "params": { 00:07:50.008 "name": "Nvme$subsystem", 00:07:50.008 "trtype": "$TEST_TRANSPORT", 00:07:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.008 "adrfam": "ipv4", 00:07:50.008 "trsvcid": "$NVMF_PORT", 00:07:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.008 "hdgst": ${hdgst:-false}, 00:07:50.008 "ddgst": ${ddgst:-false} 00:07:50.008 }, 00:07:50.008 "method": "bdev_nvme_attach_controller" 00:07:50.008 } 00:07:50.008 EOF 00:07:50.008 )") 00:07:50.008 00:20:05 -- nvmf/common.sh@542 -- # cat 00:07:50.008 00:20:05 -- nvmf/common.sh@544 -- # jq . 00:07:50.008 00:20:05 -- nvmf/common.sh@545 -- # IFS=, 00:07:50.008 00:20:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:50.008 "params": { 00:07:50.008 "name": "Nvme0", 00:07:50.008 "trtype": "tcp", 00:07:50.008 "traddr": "10.0.0.2", 00:07:50.008 "adrfam": "ipv4", 00:07:50.008 "trsvcid": "4420", 00:07:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.008 "hdgst": false, 00:07:50.008 "ddgst": false 00:07:50.008 }, 00:07:50.008 "method": "bdev_nvme_attach_controller" 00:07:50.008 }' 00:07:50.008 [2024-09-29 00:20:05.829199] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:50.008 [2024-09-29 00:20:05.829602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:07:50.267 [2024-09-29 00:20:05.971390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.267 [2024-09-29 00:20:06.038764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.526 Running I/O for 10 seconds... 00:07:51.101 00:20:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.101 00:20:06 -- common/autotest_common.sh@852 -- # return 0 00:07:51.101 00:20:06 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:51.101 00:20:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.101 00:20:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.101 00:20:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.102 00:20:06 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.102 00:20:06 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:51.102 00:20:06 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:51.102 00:20:06 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:51.102 00:20:06 -- target/host_management.sh@52 -- # local ret=1 00:07:51.102 00:20:06 -- target/host_management.sh@53 -- # local i 00:07:51.102 00:20:06 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:51.102 00:20:06 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.102 00:20:06 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:51.102 00:20:06 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.102 00:20:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.102 00:20:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 00:20:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.102 00:20:06 -- target/host_management.sh@55 -- # read_io_count=1823 00:07:51.102 00:20:06 -- target/host_management.sh@58 -- # '[' 1823 -ge 100 ']' 00:07:51.102 00:20:06 -- target/host_management.sh@59 -- # ret=0 00:07:51.102 00:20:06 -- target/host_management.sh@60 -- # break 00:07:51.102 00:20:06 -- target/host_management.sh@64 -- # return 0 00:07:51.102 00:20:06 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.102 00:20:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.102 00:20:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 [2024-09-29 00:20:06.895221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895398] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 00:20:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.102 [2024-09-29 00:20:06.895656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 00:20:06 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.102 [2024-09-29 00:20:06.895883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.895979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.895994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 00:20:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.102 [2024-09-29 00:20:06.896058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 00:20:06 -- common/autotest_common.sh@10 -- # set +x 00:07:51.102 [2024-09-29 00:20:06.896203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.102 [2024-09-29 00:20:06.896229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.102 [2024-09-29 00:20:06.896268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.896971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.896986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.103 [2024-09-29 00:20:06.897643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.103 [2024-09-29 00:20:06.897659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96e400 is same with the state(5) to be set 00:07:51.103 [2024-09-29 00:20:06.897738] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x96e400 was disconnected and freed. reset controller. 00:07:51.103 [2024-09-29 00:20:06.897891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.103 [2024-09-29 00:20:06.897916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.104 [2024-09-29 00:20:06.897934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.104 [2024-09-29 00:20:06.897949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.104 [2024-09-29 00:20:06.897965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.104 [2024-09-29 00:20:06.897979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.104 [2024-09-29 00:20:06.897993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.104 [2024-09-29 00:20:06.898009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.104 [2024-09-29 00:20:06.898023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x994150 is same with the state(5) to be set 00:07:51.104 [2024-09-29 00:20:06.899476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:51.104 task offset: 121088 on job bdev=Nvme0n1 fails 00:07:51.104 00:07:51.104 Latency(us) 00:07:51.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.104 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:51.104 Job: Nvme0n1 ended in about 0.72 seconds with error 00:07:51.104 Verification LBA range: start 0x0 length 0x400 00:07:51.104 Nvme0n1 : 0.72 2702.02 168.88 88.59 0.00 22543.86 3366.17 33840.41 00:07:51.104 
=================================================================================================================== 00:07:51.104 Total : 2702.02 168.88 88.59 0.00 22543.86 3366.17 33840.41 00:07:51.104 [2024-09-29 00:20:06.901980] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.104 [2024-09-29 00:20:06.902051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x994150 (9): Bad file descriptor 00:07:51.104 00:20:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.104 00:20:06 -- target/host_management.sh@87 -- # sleep 1 00:07:51.104 [2024-09-29 00:20:06.914832] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:52.479 00:20:07 -- target/host_management.sh@91 -- # kill -9 59889 00:07:52.479 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (59889) - No such process 00:07:52.479 00:20:07 -- target/host_management.sh@91 -- # true 00:07:52.479 00:20:07 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:52.479 00:20:07 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:52.479 00:20:07 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:52.479 00:20:07 -- nvmf/common.sh@520 -- # config=() 00:07:52.479 00:20:07 -- nvmf/common.sh@520 -- # local subsystem config 00:07:52.479 00:20:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:52.479 00:20:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:52.479 { 00:07:52.479 "params": { 00:07:52.479 "name": "Nvme$subsystem", 00:07:52.479 "trtype": "$TEST_TRANSPORT", 00:07:52.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.479 "adrfam": "ipv4", 00:07:52.479 "trsvcid": "$NVMF_PORT", 00:07:52.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.479 "hdgst": ${hdgst:-false}, 00:07:52.479 "ddgst": ${ddgst:-false} 00:07:52.480 }, 00:07:52.480 "method": "bdev_nvme_attach_controller" 00:07:52.480 } 00:07:52.480 EOF 00:07:52.480 )") 00:07:52.480 00:20:07 -- nvmf/common.sh@542 -- # cat 00:07:52.480 00:20:07 -- nvmf/common.sh@544 -- # jq . 00:07:52.480 00:20:07 -- nvmf/common.sh@545 -- # IFS=, 00:07:52.480 00:20:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:52.480 "params": { 00:07:52.480 "name": "Nvme0", 00:07:52.480 "trtype": "tcp", 00:07:52.480 "traddr": "10.0.0.2", 00:07:52.480 "adrfam": "ipv4", 00:07:52.480 "trsvcid": "4420", 00:07:52.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.480 "hdgst": false, 00:07:52.480 "ddgst": false 00:07:52.480 }, 00:07:52.480 "method": "bdev_nvme_attach_controller" 00:07:52.480 }' 00:07:52.480 [2024-09-29 00:20:07.964424] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:07:52.480 [2024-09-29 00:20:07.964523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:07:52.480 [2024-09-29 00:20:08.104131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.480 [2024-09-29 00:20:08.171010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.480 Running I/O for 1 seconds... 00:07:53.858 00:07:53.858 Latency(us) 00:07:53.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.858 Verification LBA range: start 0x0 length 0x400 00:07:53.858 Nvme0n1 : 1.01 3009.42 188.09 0.00 0.00 20913.54 1191.56 31218.97 00:07:53.858 =================================================================================================================== 00:07:53.858 Total : 3009.42 188.09 0.00 0.00 20913.54 1191.56 31218.97 00:07:53.858 00:20:09 -- target/host_management.sh@101 -- # stoptarget 00:07:53.858 00:20:09 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:53.858 00:20:09 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:53.858 00:20:09 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:53.858 00:20:09 -- target/host_management.sh@40 -- # nvmftestfini 00:07:53.858 00:20:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:53.858 00:20:09 -- nvmf/common.sh@116 -- # sync 00:07:53.858 00:20:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:53.858 00:20:09 -- nvmf/common.sh@119 -- # set +e 00:07:53.858 00:20:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:53.858 00:20:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:53.858 rmmod nvme_tcp 00:07:53.858 rmmod nvme_fabrics 00:07:53.858 rmmod nvme_keyring 00:07:53.858 00:20:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:53.858 00:20:09 -- nvmf/common.sh@123 -- # set -e 00:07:53.858 00:20:09 -- nvmf/common.sh@124 -- # return 0 00:07:53.858 00:20:09 -- nvmf/common.sh@477 -- # '[' -n 59835 ']' 00:07:53.858 00:20:09 -- nvmf/common.sh@478 -- # killprocess 59835 00:07:53.858 00:20:09 -- common/autotest_common.sh@926 -- # '[' -z 59835 ']' 00:07:53.858 00:20:09 -- common/autotest_common.sh@930 -- # kill -0 59835 00:07:53.858 00:20:09 -- common/autotest_common.sh@931 -- # uname 00:07:53.858 00:20:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:53.858 00:20:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 59835 00:07:53.858 killing process with pid 59835 00:07:53.858 00:20:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:07:53.858 00:20:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:07:53.858 00:20:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 59835' 00:07:53.858 00:20:09 -- common/autotest_common.sh@945 -- # kill 59835 00:07:53.858 00:20:09 -- common/autotest_common.sh@950 -- # wait 59835 00:07:54.118 [2024-09-29 00:20:09.833330] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.118 00:20:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:54.118 00:20:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:54.118 00:20:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:54.118 
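Editor's note on the bdevperf step above: the test never writes a config file; it pipes a generated JSON config into bdevperf over /dev/fd/62. Below is a minimal stand-alone sketch of the same invocation. The outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json assembles (the log only prints the inner attach-controller object); the addresses, NQNs and bdevperf flags are exactly the ones shown above, and the target must already be listening on 10.0.0.2:4420.

# Sketch only: write the expanded config to a temp file instead of /dev/fd/62.
# NOTE: the outer "subsystems"/"bdev" wrapper below is assumed, not printed in the log.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size, workload and runtime as the run above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1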
00:20:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.118 00:20:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:54.118 00:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.118 00:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.118 00:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.118 00:20:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:54.118 00:07:54.118 real 0m5.371s 00:07:54.118 user 0m22.744s 00:07:54.118 sys 0m1.221s 00:07:54.118 00:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.118 00:20:09 -- common/autotest_common.sh@10 -- # set +x 00:07:54.118 ************************************ 00:07:54.118 END TEST nvmf_host_management 00:07:54.118 ************************************ 00:07:54.118 00:20:09 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:54.118 00:07:54.118 real 0m5.961s 00:07:54.118 user 0m22.868s 00:07:54.118 sys 0m1.444s 00:07:54.118 00:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.118 00:20:09 -- common/autotest_common.sh@10 -- # set +x 00:07:54.118 ************************************ 00:07:54.118 END TEST nvmf_host_management 00:07:54.118 ************************************ 00:07:54.377 00:20:09 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:54.377 00:20:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:54.377 00:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.377 00:20:09 -- common/autotest_common.sh@10 -- # set +x 00:07:54.377 ************************************ 00:07:54.377 START TEST nvmf_lvol 00:07:54.377 ************************************ 00:07:54.377 00:20:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:54.377 * Looking for test storage... 
00:07:54.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.377 00:20:10 -- nvmf/common.sh@7 -- # uname -s 00:07:54.377 00:20:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.377 00:20:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.377 00:20:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.377 00:20:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.377 00:20:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.377 00:20:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.377 00:20:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.377 00:20:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.377 00:20:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.377 00:20:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:54.377 00:20:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:07:54.377 00:20:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.377 00:20:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.377 00:20:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.377 00:20:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.377 00:20:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.377 00:20:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.377 00:20:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.377 00:20:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.377 00:20:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.377 00:20:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.377 00:20:10 -- 
paths/export.sh@5 -- # export PATH 00:07:54.377 00:20:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.377 00:20:10 -- nvmf/common.sh@46 -- # : 0 00:07:54.377 00:20:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.377 00:20:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.377 00:20:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.377 00:20:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.377 00:20:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.377 00:20:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:54.377 00:20:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.377 00:20:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.377 00:20:10 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:54.377 00:20:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:54.377 00:20:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.377 00:20:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:54.377 00:20:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:54.377 00:20:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:54.377 00:20:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.377 00:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.377 00:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.377 00:20:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:54.377 00:20:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:54.377 00:20:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.378 00:20:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.378 00:20:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:54.378 00:20:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:54.378 00:20:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.378 00:20:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.378 00:20:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.378 00:20:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.378 00:20:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.378 00:20:10 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.378 00:20:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:54.378 00:20:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.378 00:20:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:54.378 00:20:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:54.378 Cannot find device "nvmf_tgt_br" 00:07:54.378 00:20:10 -- nvmf/common.sh@154 -- # true 00:07:54.378 00:20:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.378 Cannot find device "nvmf_tgt_br2" 00:07:54.378 00:20:10 -- nvmf/common.sh@155 -- # true 00:07:54.378 00:20:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:54.378 00:20:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:54.378 Cannot find device "nvmf_tgt_br" 00:07:54.378 00:20:10 -- nvmf/common.sh@157 -- # true 00:07:54.378 00:20:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:54.378 Cannot find device "nvmf_tgt_br2" 00:07:54.378 00:20:10 -- nvmf/common.sh@158 -- # true 00:07:54.378 00:20:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:54.378 00:20:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:54.636 00:20:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.636 00:20:10 -- nvmf/common.sh@161 -- # true 00:07:54.636 00:20:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.636 00:20:10 -- nvmf/common.sh@162 -- # true 00:07:54.636 00:20:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.636 00:20:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.636 00:20:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.636 00:20:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.636 00:20:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.636 00:20:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.636 00:20:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.636 00:20:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:54.636 00:20:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:54.636 00:20:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:54.636 00:20:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:54.636 00:20:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:54.636 00:20:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:54.636 00:20:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.637 00:20:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.637 00:20:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.637 00:20:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:54.637 00:20:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:54.637 00:20:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.637 00:20:10 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.637 00:20:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.637 00:20:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.637 00:20:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.637 00:20:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:54.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:07:54.637 00:07:54.637 --- 10.0.0.2 ping statistics --- 00:07:54.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.637 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:54.637 00:20:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:54.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:07:54.637 00:07:54.637 --- 10.0.0.3 ping statistics --- 00:07:54.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.637 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:54.637 00:20:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:07:54.637 00:07:54.637 --- 10.0.0.1 ping statistics --- 00:07:54.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.637 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:07:54.637 00:20:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.637 00:20:10 -- nvmf/common.sh@421 -- # return 0 00:07:54.637 00:20:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:54.637 00:20:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.637 00:20:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:54.637 00:20:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:54.637 00:20:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.637 00:20:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:54.637 00:20:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:54.637 00:20:10 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:54.637 00:20:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:54.637 00:20:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:54.637 00:20:10 -- common/autotest_common.sh@10 -- # set +x 00:07:54.637 00:20:10 -- nvmf/common.sh@469 -- # nvmfpid=60155 00:07:54.637 00:20:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:54.637 00:20:10 -- nvmf/common.sh@470 -- # waitforlisten 60155 00:07:54.637 00:20:10 -- common/autotest_common.sh@819 -- # '[' -z 60155 ']' 00:07:54.637 00:20:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.637 00:20:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:54.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.637 00:20:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
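Editor's note: the ip/iptables churn above (including the harmless "Cannot find device" / "No such file or directory" messages from the cleanup pass) builds a small veth-and-bridge topology with the target in its own network namespace, then verifies it with the pings shown. A condensed sketch of that nvmf_veth_init sequence, assuming a clean host and the interface names used in the log (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is set up the same way and omitted here):

# Target side lives in the nvmf_tgt_ns_spdk namespace; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two root-ns peers
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # initiator -> target, as verified above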
00:07:54.637 00:20:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:54.637 00:20:10 -- common/autotest_common.sh@10 -- # set +x 00:07:54.896 [2024-09-29 00:20:10.509400] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:07:54.896 [2024-09-29 00:20:10.509507] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.896 [2024-09-29 00:20:10.649614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.896 [2024-09-29 00:20:10.721212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.896 [2024-09-29 00:20:10.721445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.896 [2024-09-29 00:20:10.721462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.896 [2024-09-29 00:20:10.721473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.896 [2024-09-29 00:20:10.721572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.896 [2024-09-29 00:20:10.722253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.896 [2024-09-29 00:20:10.722305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.832 00:20:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.832 00:20:11 -- common/autotest_common.sh@852 -- # return 0 00:07:55.832 00:20:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:55.832 00:20:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:55.832 00:20:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.832 00:20:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.832 00:20:11 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:56.091 [2024-09-29 00:20:11.724216] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.091 00:20:11 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.350 00:20:12 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:56.350 00:20:12 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.608 00:20:12 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:56.609 00:20:12 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:56.867 00:20:12 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:57.126 00:20:12 -- target/nvmf_lvol.sh@29 -- # lvs=e943a791-4dbd-48c1-8c09-35db705e35da 00:07:57.126 00:20:12 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e943a791-4dbd-48c1-8c09-35db705e35da lvol 20 00:07:57.385 00:20:13 -- target/nvmf_lvol.sh@32 -- # lvol=1c447218-77ed-4e26-8a61-5f12d7714b81 00:07:57.385 00:20:13 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.643 00:20:13 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 1c447218-77ed-4e26-8a61-5f12d7714b81 00:07:57.901 00:20:13 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:58.161 [2024-09-29 00:20:13.820936] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.161 00:20:13 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.420 00:20:14 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:58.420 00:20:14 -- target/nvmf_lvol.sh@42 -- # perf_pid=60236 00:07:58.420 00:20:14 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:59.357 00:20:15 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1c447218-77ed-4e26-8a61-5f12d7714b81 MY_SNAPSHOT 00:07:59.616 00:20:15 -- target/nvmf_lvol.sh@47 -- # snapshot=b92dc940-3171-42d8-9835-4b3b645e0776 00:07:59.616 00:20:15 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1c447218-77ed-4e26-8a61-5f12d7714b81 30 00:07:59.875 00:20:15 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone b92dc940-3171-42d8-9835-4b3b645e0776 MY_CLONE 00:08:00.133 00:20:15 -- target/nvmf_lvol.sh@49 -- # clone=180097a3-ca71-481d-a595-9869c2d8910b 00:08:00.133 00:20:15 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 180097a3-ca71-481d-a595-9869c2d8910b 00:08:00.700 00:20:16 -- target/nvmf_lvol.sh@53 -- # wait 60236 00:08:08.844 Initializing NVMe Controllers 00:08:08.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:08.844 Controller IO queue size 128, less than required. 00:08:08.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:08.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:08.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:08.844 Initialization complete. Launching workers. 
00:08:08.844 ======================================================== 00:08:08.844 Latency(us) 00:08:08.844 Device Information : IOPS MiB/s Average min max 00:08:08.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9939.99 38.83 12887.93 2065.37 47989.17 00:08:08.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9965.59 38.93 12857.66 1136.51 61392.94 00:08:08.844 ======================================================== 00:08:08.844 Total : 19905.59 77.76 12872.77 1136.51 61392.94 00:08:08.844 00:08:08.844 00:20:24 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.844 00:20:24 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1c447218-77ed-4e26-8a61-5f12d7714b81 00:08:09.103 00:20:24 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e943a791-4dbd-48c1-8c09-35db705e35da 00:08:09.361 00:20:25 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:09.361 00:20:25 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:09.361 00:20:25 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:09.361 00:20:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:09.361 00:20:25 -- nvmf/common.sh@116 -- # sync 00:08:09.361 00:20:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:09.361 00:20:25 -- nvmf/common.sh@119 -- # set +e 00:08:09.361 00:20:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:09.361 00:20:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:09.361 rmmod nvme_tcp 00:08:09.361 rmmod nvme_fabrics 00:08:09.361 rmmod nvme_keyring 00:08:09.361 00:20:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:09.361 00:20:25 -- nvmf/common.sh@123 -- # set -e 00:08:09.361 00:20:25 -- nvmf/common.sh@124 -- # return 0 00:08:09.361 00:20:25 -- nvmf/common.sh@477 -- # '[' -n 60155 ']' 00:08:09.361 00:20:25 -- nvmf/common.sh@478 -- # killprocess 60155 00:08:09.361 00:20:25 -- common/autotest_common.sh@926 -- # '[' -z 60155 ']' 00:08:09.361 00:20:25 -- common/autotest_common.sh@930 -- # kill -0 60155 00:08:09.361 00:20:25 -- common/autotest_common.sh@931 -- # uname 00:08:09.361 00:20:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:09.361 00:20:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60155 00:08:09.620 killing process with pid 60155 00:08:09.620 00:20:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:09.620 00:20:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:09.620 00:20:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60155' 00:08:09.620 00:20:25 -- common/autotest_common.sh@945 -- # kill 60155 00:08:09.620 00:20:25 -- common/autotest_common.sh@950 -- # wait 60155 00:08:09.620 00:20:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:09.620 00:20:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:09.620 00:20:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:09.620 00:20:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.620 00:20:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:09.620 00:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.620 00:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.620 00:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.620 00:20:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
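Editor's note: stripped of the harness bookkeeping, the lvol-backed target that was just torn down above was assembled with a short RPC sequence. A sketch of it follows, assuming a running nvmf_tgt reachable over /var/tmp/spdk.sock and scripts/rpc.py from the same checkout; the UUIDs captured into $lvs, $lvol, $snap and $clone will of course differ from the ones in the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, UUID returned
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# While spdk_nvme_perf drives I/O, the volume is snapshotted, grown, cloned and inflated:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"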
00:08:09.620 ************************************ 00:08:09.620 END TEST nvmf_lvol 00:08:09.620 ************************************ 00:08:09.620 00:08:09.620 real 0m15.473s 00:08:09.620 user 1m4.327s 00:08:09.620 sys 0m4.532s 00:08:09.620 00:20:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.620 00:20:25 -- common/autotest_common.sh@10 -- # set +x 00:08:09.879 00:20:25 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.879 00:20:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:09.879 00:20:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.879 00:20:25 -- common/autotest_common.sh@10 -- # set +x 00:08:09.879 ************************************ 00:08:09.879 START TEST nvmf_lvs_grow 00:08:09.879 ************************************ 00:08:09.879 00:20:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.879 * Looking for test storage... 00:08:09.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.879 00:20:25 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.879 00:20:25 -- nvmf/common.sh@7 -- # uname -s 00:08:09.879 00:20:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.879 00:20:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.879 00:20:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.879 00:20:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.879 00:20:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.879 00:20:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.879 00:20:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.879 00:20:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.879 00:20:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.879 00:20:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.879 00:20:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:09.879 00:20:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:09.879 00:20:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.879 00:20:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.879 00:20:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.879 00:20:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.879 00:20:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.879 00:20:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.879 00:20:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.879 00:20:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.879 00:20:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.879 00:20:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.879 00:20:25 -- paths/export.sh@5 -- # export PATH 00:08:09.879 00:20:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.879 00:20:25 -- nvmf/common.sh@46 -- # : 0 00:08:09.879 00:20:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:09.879 00:20:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:09.879 00:20:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:09.879 00:20:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.879 00:20:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.879 00:20:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:09.879 00:20:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:09.879 00:20:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:09.879 00:20:25 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.879 00:20:25 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:09.879 00:20:25 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:09.879 00:20:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:09.879 00:20:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.879 00:20:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:09.879 00:20:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:09.879 00:20:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:09.879 00:20:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.879 00:20:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.880 00:20:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.880 00:20:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:09.880 00:20:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:09.880 00:20:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:09.880 00:20:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:09.880 00:20:25 
-- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:09.880 00:20:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:09.880 00:20:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.880 00:20:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.880 00:20:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.880 00:20:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:09.880 00:20:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.880 00:20:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.880 00:20:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.880 00:20:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.880 00:20:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.880 00:20:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.880 00:20:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.880 00:20:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.880 00:20:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:09.880 00:20:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:09.880 Cannot find device "nvmf_tgt_br" 00:08:09.880 00:20:25 -- nvmf/common.sh@154 -- # true 00:08:09.880 00:20:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.880 Cannot find device "nvmf_tgt_br2" 00:08:09.880 00:20:25 -- nvmf/common.sh@155 -- # true 00:08:09.880 00:20:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:09.880 00:20:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:09.880 Cannot find device "nvmf_tgt_br" 00:08:09.880 00:20:25 -- nvmf/common.sh@157 -- # true 00:08:09.880 00:20:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:09.880 Cannot find device "nvmf_tgt_br2" 00:08:09.880 00:20:25 -- nvmf/common.sh@158 -- # true 00:08:09.880 00:20:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:09.880 00:20:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:10.139 00:20:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:10.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.139 00:20:25 -- nvmf/common.sh@161 -- # true 00:08:10.139 00:20:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:10.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:10.139 00:20:25 -- nvmf/common.sh@162 -- # true 00:08:10.139 00:20:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:10.139 00:20:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:10.139 00:20:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:10.139 00:20:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:10.139 00:20:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:10.139 00:20:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:10.139 00:20:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:10.139 00:20:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:10.139 00:20:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:08:10.139 00:20:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:10.139 00:20:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:10.139 00:20:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:10.139 00:20:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:10.139 00:20:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:10.139 00:20:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:10.139 00:20:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:10.139 00:20:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:10.139 00:20:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:10.139 00:20:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:10.140 00:20:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:10.140 00:20:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:10.140 00:20:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:10.140 00:20:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:10.140 00:20:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:10.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:10.140 00:08:10.140 --- 10.0.0.2 ping statistics --- 00:08:10.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.140 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:10.140 00:20:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:10.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:10.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:08:10.140 00:08:10.140 --- 10.0.0.3 ping statistics --- 00:08:10.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.140 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:10.140 00:20:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:10.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:10.140 00:08:10.140 --- 10.0.0.1 ping statistics --- 00:08:10.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.140 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:10.140 00:20:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.140 00:20:25 -- nvmf/common.sh@421 -- # return 0 00:08:10.140 00:20:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:10.140 00:20:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.140 00:20:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:10.140 00:20:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:10.140 00:20:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.140 00:20:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:10.140 00:20:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:10.140 00:20:25 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:10.140 00:20:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:10.140 00:20:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.140 00:20:25 -- common/autotest_common.sh@10 -- # set +x 00:08:10.140 00:20:25 -- nvmf/common.sh@469 -- # nvmfpid=60560 00:08:10.140 00:20:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:10.140 00:20:25 -- nvmf/common.sh@470 -- # waitforlisten 60560 00:08:10.140 00:20:25 -- common/autotest_common.sh@819 -- # '[' -z 60560 ']' 00:08:10.140 00:20:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.140 00:20:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.140 00:20:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.140 00:20:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.140 00:20:25 -- common/autotest_common.sh@10 -- # set +x 00:08:10.398 [2024-09-29 00:20:26.005123] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:10.398 [2024-09-29 00:20:26.005233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.398 [2024-09-29 00:20:26.143118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.398 [2024-09-29 00:20:26.210999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.398 [2024-09-29 00:20:26.211163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.398 [2024-09-29 00:20:26.211179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.398 [2024-09-29 00:20:26.211189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
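Note: the nvmf_veth_init sequence above builds the self-contained topology every nvmf/TCP test in this job relies on: three veth pairs, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, plus an iptables accept rule for TCP port 4420. The lines below are a minimal stand-alone replay of those commands (names, addresses and nvmf_tgt arguments are the ones shown in this run; the error-tolerant teardown that precedes the setup is omitted):

    # rebuild the veth/bridge topology used by nvmf_veth_init
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk           # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the initiator to both target links
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # reachability check, as above
    modprobe nvme-tcp                                          # kernel initiator used by later tests
    # the target itself is then launched inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &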
00:08:10.398 [2024-09-29 00:20:26.211218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.334 00:20:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.334 00:20:26 -- common/autotest_common.sh@852 -- # return 0 00:08:11.334 00:20:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.334 00:20:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:11.334 00:20:26 -- common/autotest_common.sh@10 -- # set +x 00:08:11.334 00:20:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.334 00:20:26 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.593 [2024-09-29 00:20:27.184879] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:11.593 00:20:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.593 00:20:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.593 00:20:27 -- common/autotest_common.sh@10 -- # set +x 00:08:11.593 ************************************ 00:08:11.593 START TEST lvs_grow_clean 00:08:11.593 ************************************ 00:08:11.593 00:20:27 -- common/autotest_common.sh@1104 -- # lvs_grow 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.593 00:20:27 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.851 00:20:27 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.851 00:20:27 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.110 00:20:27 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:12.110 00:20:27 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:12.110 00:20:27 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:12.368 00:20:28 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:12.368 00:20:28 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:12.368 00:20:28 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e3f9d400-588d-49c2-9d1d-8091e86a204d lvol 150 00:08:12.627 00:20:28 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4a20a7c2-e654-4168-98ab-078e764bf1c3 00:08:12.627 00:20:28 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.627 00:20:28 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.886 [2024-09-29 00:20:28.525137] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.886 [2024-09-29 00:20:28.525407] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.886 true 00:08:12.886 00:20:28 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:12.886 00:20:28 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:13.145 00:20:28 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:13.145 00:20:28 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.145 00:20:28 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a20a7c2-e654-4168-98ab-078e764bf1c3 00:08:13.403 00:20:29 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:13.662 [2024-09-29 00:20:29.477806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.662 00:20:29 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.921 00:20:29 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60637 00:08:13.921 00:20:29 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:13.921 00:20:29 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.921 00:20:29 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60637 /var/tmp/bdevperf.sock 00:08:13.921 00:20:29 -- common/autotest_common.sh@819 -- # '[' -z 60637 ']' 00:08:13.921 00:20:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.921 00:20:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:13.921 00:20:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.921 00:20:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:13.921 00:20:29 -- common/autotest_common.sh@10 -- # set +x 00:08:14.180 [2024-09-29 00:20:29.802571] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:14.180 [2024-09-29 00:20:29.802860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60637 ] 00:08:14.180 [2024-09-29 00:20:29.939634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.180 [2024-09-29 00:20:30.010984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.118 00:20:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:15.118 00:20:30 -- common/autotest_common.sh@852 -- # return 0 00:08:15.118 00:20:30 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:15.377 Nvme0n1 00:08:15.377 00:20:31 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.637 [ 00:08:15.637 { 00:08:15.637 "name": "Nvme0n1", 00:08:15.637 "aliases": [ 00:08:15.637 "4a20a7c2-e654-4168-98ab-078e764bf1c3" 00:08:15.637 ], 00:08:15.637 "product_name": "NVMe disk", 00:08:15.637 "block_size": 4096, 00:08:15.637 "num_blocks": 38912, 00:08:15.637 "uuid": "4a20a7c2-e654-4168-98ab-078e764bf1c3", 00:08:15.637 "assigned_rate_limits": { 00:08:15.637 "rw_ios_per_sec": 0, 00:08:15.637 "rw_mbytes_per_sec": 0, 00:08:15.637 "r_mbytes_per_sec": 0, 00:08:15.637 "w_mbytes_per_sec": 0 00:08:15.637 }, 00:08:15.637 "claimed": false, 00:08:15.637 "zoned": false, 00:08:15.637 "supported_io_types": { 00:08:15.637 "read": true, 00:08:15.637 "write": true, 00:08:15.637 "unmap": true, 00:08:15.637 "write_zeroes": true, 00:08:15.637 "flush": true, 00:08:15.637 "reset": true, 00:08:15.637 "compare": true, 00:08:15.637 "compare_and_write": true, 00:08:15.637 "abort": true, 00:08:15.637 "nvme_admin": true, 00:08:15.637 "nvme_io": true 00:08:15.637 }, 00:08:15.637 "driver_specific": { 00:08:15.637 "nvme": [ 00:08:15.637 { 00:08:15.637 "trid": { 00:08:15.637 "trtype": "TCP", 00:08:15.637 "adrfam": "IPv4", 00:08:15.637 "traddr": "10.0.0.2", 00:08:15.637 "trsvcid": "4420", 00:08:15.637 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:15.637 }, 00:08:15.637 "ctrlr_data": { 00:08:15.637 "cntlid": 1, 00:08:15.637 "vendor_id": "0x8086", 00:08:15.637 "model_number": "SPDK bdev Controller", 00:08:15.637 "serial_number": "SPDK0", 00:08:15.637 "firmware_revision": "24.01.1", 00:08:15.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.637 "oacs": { 00:08:15.637 "security": 0, 00:08:15.637 "format": 0, 00:08:15.637 "firmware": 0, 00:08:15.637 "ns_manage": 0 00:08:15.637 }, 00:08:15.637 "multi_ctrlr": true, 00:08:15.637 "ana_reporting": false 00:08:15.637 }, 00:08:15.637 "vs": { 00:08:15.637 "nvme_version": "1.3" 00:08:15.637 }, 00:08:15.637 "ns_data": { 00:08:15.637 "id": 1, 00:08:15.637 "can_share": true 00:08:15.637 } 00:08:15.637 } 00:08:15.637 ], 00:08:15.637 "mp_policy": "active_passive" 00:08:15.637 } 00:08:15.637 } 00:08:15.637 ] 00:08:15.637 00:20:31 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60666 00:08:15.637 00:20:31 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:15.637 00:20:31 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.637 Running I/O for 10 seconds... 
00:08:17.014 Latency(us) 00:08:17.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.014 Nvme0n1 : 1.00 6446.00 25.18 0.00 0.00 0.00 0.00 0.00 00:08:17.014 =================================================================================================================== 00:08:17.014 Total : 6446.00 25.18 0.00 0.00 0.00 0.00 0.00 00:08:17.014 00:08:17.581 00:20:33 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:17.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.839 Nvme0n1 : 2.00 6523.00 25.48 0.00 0.00 0.00 0.00 0.00 00:08:17.839 =================================================================================================================== 00:08:17.839 Total : 6523.00 25.48 0.00 0.00 0.00 0.00 0.00 00:08:17.839 00:08:17.839 true 00:08:17.839 00:20:33 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:17.839 00:20:33 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:18.405 00:20:33 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:18.405 00:20:33 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:18.405 00:20:33 -- target/nvmf_lvs_grow.sh@65 -- # wait 60666 00:08:18.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.664 Nvme0n1 : 3.00 6592.33 25.75 0.00 0.00 0.00 0.00 0.00 00:08:18.664 =================================================================================================================== 00:08:18.664 Total : 6592.33 25.75 0.00 0.00 0.00 0.00 0.00 00:08:18.664 00:08:20.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.040 Nvme0n1 : 4.00 6658.75 26.01 0.00 0.00 0.00 0.00 0.00 00:08:20.040 =================================================================================================================== 00:08:20.040 Total : 6658.75 26.01 0.00 0.00 0.00 0.00 0.00 00:08:20.040 00:08:20.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.977 Nvme0n1 : 5.00 6597.00 25.77 0.00 0.00 0.00 0.00 0.00 00:08:20.977 =================================================================================================================== 00:08:20.977 Total : 6597.00 25.77 0.00 0.00 0.00 0.00 0.00 00:08:20.977 00:08:21.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.913 Nvme0n1 : 6.00 6598.17 25.77 0.00 0.00 0.00 0.00 0.00 00:08:21.913 =================================================================================================================== 00:08:21.913 Total : 6598.17 25.77 0.00 0.00 0.00 0.00 0.00 00:08:21.913 00:08:22.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.848 Nvme0n1 : 7.00 6599.00 25.78 0.00 0.00 0.00 0.00 0.00 00:08:22.848 =================================================================================================================== 00:08:22.848 Total : 6599.00 25.78 0.00 0.00 0.00 0.00 0.00 00:08:22.848 00:08:23.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.781 Nvme0n1 : 8.00 6615.50 25.84 0.00 0.00 0.00 0.00 0.00 00:08:23.781 
=================================================================================================================== 00:08:23.781 Total : 6615.50 25.84 0.00 0.00 0.00 0.00 0.00 00:08:23.781 00:08:24.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.717 Nvme0n1 : 9.00 6600.11 25.78 0.00 0.00 0.00 0.00 0.00 00:08:24.717 =================================================================================================================== 00:08:24.717 Total : 6600.11 25.78 0.00 0.00 0.00 0.00 0.00 00:08:24.717 00:08:25.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.663 Nvme0n1 : 10.00 6587.80 25.73 0.00 0.00 0.00 0.00 0.00 00:08:25.663 =================================================================================================================== 00:08:25.663 Total : 6587.80 25.73 0.00 0.00 0.00 0.00 0.00 00:08:25.663 00:08:25.663 00:08:25.663 Latency(us) 00:08:25.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.663 Nvme0n1 : 10.01 6591.16 25.75 0.00 0.00 19414.66 5153.51 77213.32 00:08:25.663 =================================================================================================================== 00:08:25.663 Total : 6591.16 25.75 0.00 0.00 19414.66 5153.51 77213.32 00:08:25.663 0 00:08:25.942 00:20:41 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60637 00:08:25.942 00:20:41 -- common/autotest_common.sh@926 -- # '[' -z 60637 ']' 00:08:25.942 00:20:41 -- common/autotest_common.sh@930 -- # kill -0 60637 00:08:25.942 00:20:41 -- common/autotest_common.sh@931 -- # uname 00:08:25.942 00:20:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:25.942 00:20:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60637 00:08:25.942 killing process with pid 60637 00:08:25.942 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.942 00:08:25.942 Latency(us) 00:08:25.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.942 =================================================================================================================== 00:08:25.942 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.942 00:20:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:08:25.942 00:20:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:08:25.942 00:20:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60637' 00:08:25.942 00:20:41 -- common/autotest_common.sh@945 -- # kill 60637 00:08:25.942 00:20:41 -- common/autotest_common.sh@950 -- # wait 60637 00:08:25.942 00:20:41 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.202 00:20:42 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:26.202 00:20:42 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:26.769 00:20:42 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:26.769 00:20:42 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:26.769 00:20:42 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.769 [2024-09-29 00:20:42.574983] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.769 
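Note: the lvs_grow_clean block that just finished is the core of this test: grow a logical volume store while bdevperf keeps random writes in flight against the lvol exported over NVMe/TCP. A condensed sketch of the RPC sequence follows; rpc.py and file paths are shortened, and $lvs / $lvol stand for the UUIDs reported above (e3f9d400-588d-49c2-9d1d-8091e86a204d and 4a20a7c2-e654-4168-98ab-078e764bf1c3):

    # backing file and lvstore: 200 MiB file, 4 MiB clusters -> 49 data clusters
    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB logical volume
    # enlarge the backing file ahead of time and let the aio bdev pick it up
    truncate -s 400M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev                            # 51200 -> 102400 blocks, lvstore still 49 clusters
    # export the lvol over NVMe/TCP
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # with bdevperf running randwrite against the namespace, grow the lvstore in place
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99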
00:20:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:26.769 00:20:42 -- common/autotest_common.sh@640 -- # local es=0 00:08:26.769 00:20:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:26.769 00:20:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:26.769 00:20:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:26.769 00:20:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.028 00:20:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:27.028 00:20:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.028 00:20:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:27.028 00:20:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:27.028 00:20:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:27.028 00:20:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:27.287 request: 00:08:27.287 { 00:08:27.287 "uuid": "e3f9d400-588d-49c2-9d1d-8091e86a204d", 00:08:27.287 "method": "bdev_lvol_get_lvstores", 00:08:27.287 "req_id": 1 00:08:27.287 } 00:08:27.287 Got JSON-RPC error response 00:08:27.287 response: 00:08:27.287 { 00:08:27.287 "code": -19, 00:08:27.287 "message": "No such device" 00:08:27.287 } 00:08:27.287 00:20:42 -- common/autotest_common.sh@643 -- # es=1 00:08:27.287 00:20:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:27.287 00:20:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:27.287 00:20:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:27.287 00:20:42 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.287 aio_bdev 00:08:27.287 00:20:43 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4a20a7c2-e654-4168-98ab-078e764bf1c3 00:08:27.287 00:20:43 -- common/autotest_common.sh@887 -- # local bdev_name=4a20a7c2-e654-4168-98ab-078e764bf1c3 00:08:27.287 00:20:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:27.287 00:20:43 -- common/autotest_common.sh@889 -- # local i 00:08:27.287 00:20:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:27.287 00:20:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:27.287 00:20:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.546 00:20:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a20a7c2-e654-4168-98ab-078e764bf1c3 -t 2000 00:08:27.804 [ 00:08:27.804 { 00:08:27.804 "name": "4a20a7c2-e654-4168-98ab-078e764bf1c3", 00:08:27.804 "aliases": [ 00:08:27.804 "lvs/lvol" 00:08:27.804 ], 00:08:27.804 "product_name": "Logical Volume", 00:08:27.804 "block_size": 4096, 00:08:27.804 "num_blocks": 38912, 00:08:27.804 "uuid": "4a20a7c2-e654-4168-98ab-078e764bf1c3", 00:08:27.804 "assigned_rate_limits": { 00:08:27.804 "rw_ios_per_sec": 0, 00:08:27.804 "rw_mbytes_per_sec": 0, 00:08:27.804 "r_mbytes_per_sec": 0, 00:08:27.804 
"w_mbytes_per_sec": 0 00:08:27.804 }, 00:08:27.804 "claimed": false, 00:08:27.804 "zoned": false, 00:08:27.804 "supported_io_types": { 00:08:27.804 "read": true, 00:08:27.804 "write": true, 00:08:27.804 "unmap": true, 00:08:27.804 "write_zeroes": true, 00:08:27.804 "flush": false, 00:08:27.804 "reset": true, 00:08:27.804 "compare": false, 00:08:27.804 "compare_and_write": false, 00:08:27.804 "abort": false, 00:08:27.804 "nvme_admin": false, 00:08:27.804 "nvme_io": false 00:08:27.804 }, 00:08:27.804 "driver_specific": { 00:08:27.804 "lvol": { 00:08:27.804 "lvol_store_uuid": "e3f9d400-588d-49c2-9d1d-8091e86a204d", 00:08:27.804 "base_bdev": "aio_bdev", 00:08:27.804 "thin_provision": false, 00:08:27.804 "snapshot": false, 00:08:27.804 "clone": false, 00:08:27.804 "esnap_clone": false 00:08:27.804 } 00:08:27.804 } 00:08:27.804 } 00:08:27.804 ] 00:08:27.805 00:20:43 -- common/autotest_common.sh@895 -- # return 0 00:08:27.805 00:20:43 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:27.805 00:20:43 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:28.063 00:20:43 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:28.063 00:20:43 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:28.063 00:20:43 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:28.321 00:20:44 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:28.321 00:20:44 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a20a7c2-e654-4168-98ab-078e764bf1c3 00:08:28.579 00:20:44 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3f9d400-588d-49c2-9d1d-8091e86a204d 00:08:28.836 00:20:44 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.095 00:20:44 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.661 ************************************ 00:08:29.661 END TEST lvs_grow_clean 00:08:29.661 ************************************ 00:08:29.661 00:08:29.661 real 0m18.029s 00:08:29.661 user 0m17.258s 00:08:29.661 sys 0m2.278s 00:08:29.661 00:20:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.661 00:20:45 -- common/autotest_common.sh@10 -- # set +x 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:29.661 00:20:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.661 00:20:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.661 00:20:45 -- common/autotest_common.sh@10 -- # set +x 00:08:29.661 ************************************ 00:08:29.661 START TEST lvs_grow_dirty 00:08:29.661 ************************************ 00:08:29.661 00:20:45 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@20 -- # local 
lvol_bdev_size_mb=150 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.661 00:20:45 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.920 00:20:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.920 00:20:45 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.178 00:20:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:30.178 00:20:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.178 00:20:45 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:30.437 00:20:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.437 00:20:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.437 00:20:46 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u aef3167c-4596-47bf-8139-c07358ef0c0f lvol 150 00:08:30.695 00:20:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:30.696 00:20:46 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:30.696 00:20:46 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.696 [2024-09-29 00:20:46.538226] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.696 [2024-09-29 00:20:46.538321] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.954 true 00:08:30.954 00:20:46 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:30.954 00:20:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.954 00:20:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.954 00:20:46 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.212 00:20:47 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:31.470 00:20:47 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.728 00:20:47 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.987 00:20:47 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.987 00:20:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60907 00:08:31.987 00:20:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
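Note: both the clean and the dirty variants drive I/O the same way: bdevperf is started with -z so it sits idle until told otherwise, the exported namespace is attached as a local NVMe bdev over the target's TCP listener, and perform_tests kicks off the 10-second run. A minimal sketch, with binary and script paths shortened and the flag meanings spelled out in comments (the flag values are the ones used above):

    # start bdevperf idle: core mask 0x2, 4 KiB I/O, queue depth 128, randwrite, 10 s, stats every 1 s
    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the NVMe/TCP namespace exported by the target as bdev "Nvme0"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # start the workload; the per-second latency tables in this log come from this run
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests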
00:08:31.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.987 00:20:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60907 /var/tmp/bdevperf.sock 00:08:31.987 00:20:47 -- common/autotest_common.sh@819 -- # '[' -z 60907 ']' 00:08:31.987 00:20:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.987 00:20:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.987 00:20:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.987 00:20:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.987 00:20:47 -- common/autotest_common.sh@10 -- # set +x 00:08:31.987 [2024-09-29 00:20:47.773781] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:31.987 [2024-09-29 00:20:47.774048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:08:32.245 [2024-09-29 00:20:47.904671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.245 [2024-09-29 00:20:47.981136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.179 00:20:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.179 00:20:48 -- common/autotest_common.sh@852 -- # return 0 00:08:33.179 00:20:48 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.179 Nvme0n1 00:08:33.437 00:20:49 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:33.437 [ 00:08:33.437 { 00:08:33.437 "name": "Nvme0n1", 00:08:33.437 "aliases": [ 00:08:33.437 "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a" 00:08:33.437 ], 00:08:33.437 "product_name": "NVMe disk", 00:08:33.437 "block_size": 4096, 00:08:33.438 "num_blocks": 38912, 00:08:33.438 "uuid": "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a", 00:08:33.438 "assigned_rate_limits": { 00:08:33.438 "rw_ios_per_sec": 0, 00:08:33.438 "rw_mbytes_per_sec": 0, 00:08:33.438 "r_mbytes_per_sec": 0, 00:08:33.438 "w_mbytes_per_sec": 0 00:08:33.438 }, 00:08:33.438 "claimed": false, 00:08:33.438 "zoned": false, 00:08:33.438 "supported_io_types": { 00:08:33.438 "read": true, 00:08:33.438 "write": true, 00:08:33.438 "unmap": true, 00:08:33.438 "write_zeroes": true, 00:08:33.438 "flush": true, 00:08:33.438 "reset": true, 00:08:33.438 "compare": true, 00:08:33.438 "compare_and_write": true, 00:08:33.438 "abort": true, 00:08:33.438 "nvme_admin": true, 00:08:33.438 "nvme_io": true 00:08:33.438 }, 00:08:33.438 "driver_specific": { 00:08:33.438 "nvme": [ 00:08:33.438 { 00:08:33.438 "trid": { 00:08:33.438 "trtype": "TCP", 00:08:33.438 "adrfam": "IPv4", 00:08:33.438 "traddr": "10.0.0.2", 00:08:33.438 "trsvcid": "4420", 00:08:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:33.438 }, 00:08:33.438 "ctrlr_data": { 00:08:33.438 "cntlid": 1, 00:08:33.438 "vendor_id": "0x8086", 00:08:33.438 "model_number": "SPDK bdev Controller", 00:08:33.438 "serial_number": "SPDK0", 00:08:33.438 "firmware_revision": "24.01.1", 00:08:33.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.438 "oacs": { 00:08:33.438 "security": 0, 00:08:33.438 "format": 0, 
00:08:33.438 "firmware": 0, 00:08:33.438 "ns_manage": 0 00:08:33.438 }, 00:08:33.438 "multi_ctrlr": true, 00:08:33.438 "ana_reporting": false 00:08:33.438 }, 00:08:33.438 "vs": { 00:08:33.438 "nvme_version": "1.3" 00:08:33.438 }, 00:08:33.438 "ns_data": { 00:08:33.438 "id": 1, 00:08:33.438 "can_share": true 00:08:33.438 } 00:08:33.438 } 00:08:33.438 ], 00:08:33.438 "mp_policy": "active_passive" 00:08:33.438 } 00:08:33.438 } 00:08:33.438 ] 00:08:33.438 00:20:49 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.438 00:20:49 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60930 00:08:33.438 00:20:49 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:33.696 Running I/O for 10 seconds... 00:08:34.630 Latency(us) 00:08:34.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.630 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:34.630 =================================================================================================================== 00:08:34.630 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:34.630 00:08:35.564 00:20:51 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:35.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.564 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:35.564 =================================================================================================================== 00:08:35.564 Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:35.564 00:08:35.823 true 00:08:35.823 00:20:51 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:35.823 00:20:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.118 00:20:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.118 00:20:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.118 00:20:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 60930 00:08:36.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.693 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:36.693 =================================================================================================================== 00:08:36.693 Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:36.693 00:08:37.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.630 Nvme0n1 : 4.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:37.630 =================================================================================================================== 00:08:37.630 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:37.630 00:08:38.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.565 Nvme0n1 : 5.00 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:08:38.565 =================================================================================================================== 00:08:38.565 Total : 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:08:38.565 00:08:39.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.940 Nvme0n1 : 6.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:39.940 
=================================================================================================================== 00:08:39.940 Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:39.940 00:08:40.893 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.893 Nvme0n1 : 7.00 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:08:40.893 =================================================================================================================== 00:08:40.893 Total : 6622.14 25.87 0.00 0.00 0.00 0.00 0.00 00:08:40.893 00:08:41.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.828 Nvme0n1 : 8.00 6492.25 25.36 0.00 0.00 0.00 0.00 0.00 00:08:41.828 =================================================================================================================== 00:08:41.828 Total : 6492.25 25.36 0.00 0.00 0.00 0.00 0.00 00:08:41.828 00:08:42.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.766 Nvme0n1 : 9.00 6476.44 25.30 0.00 0.00 0.00 0.00 0.00 00:08:42.766 =================================================================================================================== 00:08:42.766 Total : 6476.44 25.30 0.00 0.00 0.00 0.00 0.00 00:08:42.766 00:08:43.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.702 Nvme0n1 : 10.00 6463.80 25.25 0.00 0.00 0.00 0.00 0.00 00:08:43.702 =================================================================================================================== 00:08:43.702 Total : 6463.80 25.25 0.00 0.00 0.00 0.00 0.00 00:08:43.702 00:08:43.702 00:08:43.702 Latency(us) 00:08:43.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.702 Nvme0n1 : 10.01 6471.04 25.28 0.00 0.00 19776.35 9532.51 156333.15 00:08:43.702 =================================================================================================================== 00:08:43.702 Total : 6471.04 25.28 0.00 0.00 19776.35 9532.51 156333.15 00:08:43.702 0 00:08:43.702 00:20:59 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60907 00:08:43.702 00:20:59 -- common/autotest_common.sh@926 -- # '[' -z 60907 ']' 00:08:43.702 00:20:59 -- common/autotest_common.sh@930 -- # kill -0 60907 00:08:43.702 00:20:59 -- common/autotest_common.sh@931 -- # uname 00:08:43.702 00:20:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:43.702 00:20:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 60907 00:08:43.702 killing process with pid 60907 00:08:43.702 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.702 00:08:43.702 Latency(us) 00:08:43.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.702 =================================================================================================================== 00:08:43.702 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.702 00:20:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:08:43.702 00:20:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:08:43.702 00:20:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 60907' 00:08:43.702 00:20:59 -- common/autotest_common.sh@945 -- # kill 60907 00:08:43.702 00:20:59 -- common/autotest_common.sh@950 -- # wait 60907 00:08:43.960 00:20:59 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:44.219 00:20:59 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:44.219 00:20:59 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60560 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@74 -- # wait 60560 00:08:44.478 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60560 Killed "${NVMF_APP[@]}" "$@" 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:44.478 00:21:00 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:44.478 00:21:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:44.478 00:21:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:44.478 00:21:00 -- common/autotest_common.sh@10 -- # set +x 00:08:44.478 00:21:00 -- nvmf/common.sh@469 -- # nvmfpid=61062 00:08:44.478 00:21:00 -- nvmf/common.sh@470 -- # waitforlisten 61062 00:08:44.478 00:21:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.478 00:21:00 -- common/autotest_common.sh@819 -- # '[' -z 61062 ']' 00:08:44.478 00:21:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.478 00:21:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:44.478 00:21:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.478 00:21:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:44.478 00:21:00 -- common/autotest_common.sh@10 -- # set +x 00:08:44.478 [2024-09-29 00:21:00.290581] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:44.478 [2024-09-29 00:21:00.290668] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.737 [2024-09-29 00:21:00.425438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.737 [2024-09-29 00:21:00.480485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.737 [2024-09-29 00:21:00.480909] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.737 [2024-09-29 00:21:00.480930] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.737 [2024-09-29 00:21:00.480939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
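Note: this is the step that makes the second case "dirty": the original nvmf_tgt (pid 60560) is killed with SIGKILL while the freshly grown lvstore still has metadata that was never cleanly closed, and a new target (pid 61062) is started in the same network namespace. When the aio bdev is re-created against the same backing file a little further down, the blobstore is replayed rather than opened cleanly, which is what the bs_recover notices below report. A sketch of the restart, using the paths and arguments shown in this run:

    # unclean shutdown of the target that owns the dirty lvstore
    kill -9 "$nvmfpid"                                           # pid 60560 in this run
    # bring a fresh target up inside the same namespace
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!                                                   # pid 61062 in this run
    # re-register the backing file; opening the lvstore now triggers blobstore recovery
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096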
00:08:44.737 [2024-09-29 00:21:00.480983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.672 00:21:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.672 00:21:01 -- common/autotest_common.sh@852 -- # return 0 00:08:45.672 00:21:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:45.672 00:21:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:45.672 00:21:01 -- common/autotest_common.sh@10 -- # set +x 00:08:45.672 00:21:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.672 00:21:01 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.932 [2024-09-29 00:21:01.581424] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.932 [2024-09-29 00:21:01.581871] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.932 [2024-09-29 00:21:01.582195] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.932 00:21:01 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:45.932 00:21:01 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:45.932 00:21:01 -- common/autotest_common.sh@887 -- # local bdev_name=2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:45.932 00:21:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:45.932 00:21:01 -- common/autotest_common.sh@889 -- # local i 00:08:45.932 00:21:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:45.932 00:21:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:45.932 00:21:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.191 00:21:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a -t 2000 00:08:46.449 [ 00:08:46.449 { 00:08:46.449 "name": "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a", 00:08:46.449 "aliases": [ 00:08:46.449 "lvs/lvol" 00:08:46.449 ], 00:08:46.449 "product_name": "Logical Volume", 00:08:46.449 "block_size": 4096, 00:08:46.449 "num_blocks": 38912, 00:08:46.450 "uuid": "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a", 00:08:46.450 "assigned_rate_limits": { 00:08:46.450 "rw_ios_per_sec": 0, 00:08:46.450 "rw_mbytes_per_sec": 0, 00:08:46.450 "r_mbytes_per_sec": 0, 00:08:46.450 "w_mbytes_per_sec": 0 00:08:46.450 }, 00:08:46.450 "claimed": false, 00:08:46.450 "zoned": false, 00:08:46.450 "supported_io_types": { 00:08:46.450 "read": true, 00:08:46.450 "write": true, 00:08:46.450 "unmap": true, 00:08:46.450 "write_zeroes": true, 00:08:46.450 "flush": false, 00:08:46.450 "reset": true, 00:08:46.450 "compare": false, 00:08:46.450 "compare_and_write": false, 00:08:46.450 "abort": false, 00:08:46.450 "nvme_admin": false, 00:08:46.450 "nvme_io": false 00:08:46.450 }, 00:08:46.450 "driver_specific": { 00:08:46.450 "lvol": { 00:08:46.450 "lvol_store_uuid": "aef3167c-4596-47bf-8139-c07358ef0c0f", 00:08:46.450 "base_bdev": "aio_bdev", 00:08:46.450 "thin_provision": false, 00:08:46.450 "snapshot": false, 00:08:46.450 "clone": false, 00:08:46.450 "esnap_clone": false 00:08:46.450 } 00:08:46.450 } 00:08:46.450 } 00:08:46.450 ] 00:08:46.450 00:21:02 -- common/autotest_common.sh@895 -- # return 0 00:08:46.450 00:21:02 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:46.450 00:21:02 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:46.708 00:21:02 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:46.708 00:21:02 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:46.709 00:21:02 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:46.967 00:21:02 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:46.967 00:21:02 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.967 [2024-09-29 00:21:02.815169] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:47.226 00:21:02 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:47.226 00:21:02 -- common/autotest_common.sh@640 -- # local es=0 00:08:47.226 00:21:02 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:47.226 00:21:02 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.226 00:21:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:47.226 00:21:02 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.226 00:21:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:47.226 00:21:02 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.226 00:21:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:47.226 00:21:02 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.226 00:21:02 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:47.226 00:21:02 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:47.497 request: 00:08:47.497 { 00:08:47.497 "uuid": "aef3167c-4596-47bf-8139-c07358ef0c0f", 00:08:47.497 "method": "bdev_lvol_get_lvstores", 00:08:47.497 "req_id": 1 00:08:47.497 } 00:08:47.497 Got JSON-RPC error response 00:08:47.497 response: 00:08:47.497 { 00:08:47.497 "code": -19, 00:08:47.497 "message": "No such device" 00:08:47.497 } 00:08:47.497 00:21:03 -- common/autotest_common.sh@643 -- # es=1 00:08:47.497 00:21:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:47.497 00:21:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:47.497 00:21:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:47.497 00:21:03 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.778 aio_bdev 00:08:47.778 00:21:03 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:47.778 00:21:03 -- common/autotest_common.sh@887 -- # local bdev_name=2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:47.778 00:21:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:47.778 00:21:03 -- common/autotest_common.sh@889 -- # local i 00:08:47.778 00:21:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:47.778 00:21:03 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:08:47.778 00:21:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.778 00:21:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a -t 2000 00:08:48.037 [ 00:08:48.037 { 00:08:48.037 "name": "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a", 00:08:48.037 "aliases": [ 00:08:48.037 "lvs/lvol" 00:08:48.037 ], 00:08:48.037 "product_name": "Logical Volume", 00:08:48.037 "block_size": 4096, 00:08:48.037 "num_blocks": 38912, 00:08:48.037 "uuid": "2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a", 00:08:48.037 "assigned_rate_limits": { 00:08:48.037 "rw_ios_per_sec": 0, 00:08:48.037 "rw_mbytes_per_sec": 0, 00:08:48.037 "r_mbytes_per_sec": 0, 00:08:48.037 "w_mbytes_per_sec": 0 00:08:48.037 }, 00:08:48.037 "claimed": false, 00:08:48.037 "zoned": false, 00:08:48.037 "supported_io_types": { 00:08:48.037 "read": true, 00:08:48.037 "write": true, 00:08:48.037 "unmap": true, 00:08:48.037 "write_zeroes": true, 00:08:48.037 "flush": false, 00:08:48.037 "reset": true, 00:08:48.037 "compare": false, 00:08:48.037 "compare_and_write": false, 00:08:48.037 "abort": false, 00:08:48.037 "nvme_admin": false, 00:08:48.037 "nvme_io": false 00:08:48.037 }, 00:08:48.037 "driver_specific": { 00:08:48.037 "lvol": { 00:08:48.037 "lvol_store_uuid": "aef3167c-4596-47bf-8139-c07358ef0c0f", 00:08:48.037 "base_bdev": "aio_bdev", 00:08:48.037 "thin_provision": false, 00:08:48.037 "snapshot": false, 00:08:48.037 "clone": false, 00:08:48.037 "esnap_clone": false 00:08:48.037 } 00:08:48.037 } 00:08:48.037 } 00:08:48.037 ] 00:08:48.037 00:21:03 -- common/autotest_common.sh@895 -- # return 0 00:08:48.037 00:21:03 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:48.037 00:21:03 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:48.296 00:21:04 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:48.296 00:21:04 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:48.296 00:21:04 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:48.554 00:21:04 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:48.554 00:21:04 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2364ddf6-f31a-4fd0-b8aa-2aa3406ac23a 00:08:48.813 00:21:04 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aef3167c-4596-47bf-8139-c07358ef0c0f 00:08:49.072 00:21:04 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.331 00:21:05 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.898 ************************************ 00:08:49.898 END TEST lvs_grow_dirty 00:08:49.898 ************************************ 00:08:49.898 00:08:49.898 real 0m20.176s 00:08:49.898 user 0m40.548s 00:08:49.898 sys 0m9.205s 00:08:49.898 00:21:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.898 00:21:05 -- common/autotest_common.sh@10 -- # set +x 00:08:49.898 00:21:05 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.898 00:21:05 -- common/autotest_common.sh@796 -- # type=--id 00:08:49.898 00:21:05 -- common/autotest_common.sh@797 -- # id=0 00:08:49.898 
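Note: the checks that closed out lvs_grow_dirty above are the point of the exercise: after the kill -9 and the blobstore recovery, the lvstore still reports the grown geometry, so the in-flight grow was not lost. A sketch of the two assertions, using the lvstore UUID reported in this run (the 150 MiB lvol occupies 38 of the 99 4 MiB clusters, leaving 61 free):

    lvs=aef3167c-4596-47bf-8139-c07358ef0c0f
    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 ))    # capacity not claimed by the lvol survived the restart
    (( total == 99 ))   # the 49 -> 99 grow persisted across the unclean shutdown

The process_shm helper running below simply tars /dev/shm/nvmf_trace.0 into the job's output directory so the tracepoints enabled with -e 0xFFFF can be inspected offline.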
00:21:05 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:08:49.898 00:21:05 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.898 00:21:05 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:08:49.898 00:21:05 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:08:49.898 00:21:05 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:08:49.898 00:21:05 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.898 nvmf_trace.0 00:08:49.898 00:21:05 -- common/autotest_common.sh@811 -- # return 0 00:08:49.898 00:21:05 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.898 00:21:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:49.898 00:21:05 -- nvmf/common.sh@116 -- # sync 00:08:50.465 00:21:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:50.465 00:21:06 -- nvmf/common.sh@119 -- # set +e 00:08:50.465 00:21:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:50.465 00:21:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:50.465 rmmod nvme_tcp 00:08:50.465 rmmod nvme_fabrics 00:08:50.465 rmmod nvme_keyring 00:08:50.465 00:21:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:50.465 00:21:06 -- nvmf/common.sh@123 -- # set -e 00:08:50.465 00:21:06 -- nvmf/common.sh@124 -- # return 0 00:08:50.465 00:21:06 -- nvmf/common.sh@477 -- # '[' -n 61062 ']' 00:08:50.465 00:21:06 -- nvmf/common.sh@478 -- # killprocess 61062 00:08:50.465 00:21:06 -- common/autotest_common.sh@926 -- # '[' -z 61062 ']' 00:08:50.465 00:21:06 -- common/autotest_common.sh@930 -- # kill -0 61062 00:08:50.465 00:21:06 -- common/autotest_common.sh@931 -- # uname 00:08:50.465 00:21:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.465 00:21:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61062 00:08:50.465 killing process with pid 61062 00:08:50.465 00:21:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.465 00:21:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.465 00:21:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61062' 00:08:50.465 00:21:06 -- common/autotest_common.sh@945 -- # kill 61062 00:08:50.465 00:21:06 -- common/autotest_common.sh@950 -- # wait 61062 00:08:50.724 00:21:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.724 00:21:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:50.724 00:21:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:50.724 00:21:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.724 00:21:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:50.724 00:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.724 00:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.724 00:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.724 00:21:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:50.724 ************************************ 00:08:50.724 END TEST nvmf_lvs_grow 00:08:50.724 ************************************ 00:08:50.725 00:08:50.725 real 0m40.861s 00:08:50.725 user 1m4.468s 00:08:50.725 sys 0m12.403s 00:08:50.725 00:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.725 00:21:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.725 00:21:06 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.725 00:21:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.725 00:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.725 00:21:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.725 ************************************ 00:08:50.725 START TEST nvmf_bdev_io_wait 00:08:50.725 ************************************ 00:08:50.725 00:21:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.725 * Looking for test storage... 00:08:50.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.725 00:21:06 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.725 00:21:06 -- nvmf/common.sh@7 -- # uname -s 00:08:50.725 00:21:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.725 00:21:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.725 00:21:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.725 00:21:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.725 00:21:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.725 00:21:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.725 00:21:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.725 00:21:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.725 00:21:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.725 00:21:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:50.725 00:21:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:50.725 00:21:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.725 00:21:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.725 00:21:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.725 00:21:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.725 00:21:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.725 00:21:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.725 00:21:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.725 00:21:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.725 00:21:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:50.725 00:21:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.725 00:21:06 -- paths/export.sh@5 -- # export PATH 00:08:50.725 00:21:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.725 00:21:06 -- nvmf/common.sh@46 -- # : 0 00:08:50.725 00:21:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.725 00:21:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.725 00:21:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.725 00:21:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.725 00:21:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.725 00:21:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.725 00:21:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.725 00:21:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.725 00:21:06 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.725 00:21:06 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.725 00:21:06 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.725 00:21:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.725 00:21:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.725 00:21:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.725 00:21:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.725 00:21:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.725 00:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.725 00:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.725 00:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.725 00:21:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:50.725 00:21:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:50.725 00:21:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.725 00:21:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.725 00:21:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.725 00:21:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:50.725 00:21:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.725 00:21:06 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.725 00:21:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.725 00:21:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.725 00:21:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.725 00:21:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.725 00:21:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.725 00:21:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.725 00:21:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:50.725 00:21:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:50.725 Cannot find device "nvmf_tgt_br" 00:08:50.725 00:21:06 -- nvmf/common.sh@154 -- # true 00:08:50.725 00:21:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.725 Cannot find device "nvmf_tgt_br2" 00:08:50.725 00:21:06 -- nvmf/common.sh@155 -- # true 00:08:50.725 00:21:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:50.984 00:21:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:50.984 Cannot find device "nvmf_tgt_br" 00:08:50.984 00:21:06 -- nvmf/common.sh@157 -- # true 00:08:50.984 00:21:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:50.984 Cannot find device "nvmf_tgt_br2" 00:08:50.984 00:21:06 -- nvmf/common.sh@158 -- # true 00:08:50.984 00:21:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:50.984 00:21:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:50.984 00:21:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.984 00:21:06 -- nvmf/common.sh@161 -- # true 00:08:50.984 00:21:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.984 00:21:06 -- nvmf/common.sh@162 -- # true 00:08:50.984 00:21:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.984 00:21:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.984 00:21:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.984 00:21:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.984 00:21:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.984 00:21:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.984 00:21:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.984 00:21:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.984 00:21:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.984 00:21:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:50.984 00:21:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:50.984 00:21:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:50.984 00:21:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:50.984 00:21:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.984 00:21:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:08:50.984 00:21:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.984 00:21:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:50.984 00:21:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:50.984 00:21:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.984 00:21:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.984 00:21:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.984 00:21:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.984 00:21:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.984 00:21:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:50.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:08:50.984 00:08:50.984 --- 10.0.0.2 ping statistics --- 00:08:50.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.984 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:50.984 00:21:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:50.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:50.984 00:08:50.984 --- 10.0.0.3 ping statistics --- 00:08:50.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.984 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:50.984 00:21:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:51.244 00:08:51.244 --- 10.0.0.1 ping statistics --- 00:08:51.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.244 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:51.244 00:21:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.244 00:21:06 -- nvmf/common.sh@421 -- # return 0 00:08:51.244 00:21:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:51.244 00:21:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.244 00:21:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:51.244 00:21:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:51.244 00:21:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.244 00:21:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:51.244 00:21:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:51.244 00:21:06 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:51.244 00:21:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:51.244 00:21:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:51.244 00:21:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
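Stripped of the xtrace prefixes, the nvmf_veth_init sequence above builds the NET_TYPE=virt topology that the three pings then verify: the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and the host-side veth peers are bridged together. Roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring every end up (including lo inside the namespace), then bridge the host-side peers
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    # admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT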
00:08:51.244 00:21:06 -- nvmf/common.sh@469 -- # nvmfpid=61380 00:08:51.244 00:21:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:51.244 00:21:06 -- nvmf/common.sh@470 -- # waitforlisten 61380 00:08:51.244 00:21:06 -- common/autotest_common.sh@819 -- # '[' -z 61380 ']' 00:08:51.244 00:21:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.244 00:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:51.244 00:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.244 00:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:51.244 00:21:06 -- common/autotest_common.sh@10 -- # set +x 00:08:51.244 [2024-09-29 00:21:06.917962] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.244 [2024-09-29 00:21:06.918052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.244 [2024-09-29 00:21:07.057467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.503 [2024-09-29 00:21:07.110506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.503 [2024-09-29 00:21:07.110881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.503 [2024-09-29 00:21:07.110932] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.503 [2024-09-29 00:21:07.111159] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.503 [2024-09-29 00:21:07.111256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.503 [2024-09-29 00:21:07.111951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.503 [2024-09-29 00:21:07.112114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.503 [2024-09-29 00:21:07.112114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.503 00:21:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.503 00:21:07 -- common/autotest_common.sh@852 -- # return 0 00:08:51.503 00:21:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:51.503 00:21:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 00:21:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 [2024-09-29 00:21:07.257508] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 Malloc0 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.503 00:21:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.503 00:21:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.503 [2024-09-29 00:21:07.316306] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.503 00:21:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61402 00:08:51.503 00:21:07 
-- target/bdev_io_wait.sh@30 -- # READ_PID=61404 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:51.503 00:21:07 -- nvmf/common.sh@520 -- # config=() 00:08:51.503 00:21:07 -- nvmf/common.sh@520 -- # local subsystem config 00:08:51.503 00:21:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:51.503 00:21:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:51.503 { 00:08:51.503 "params": { 00:08:51.503 "name": "Nvme$subsystem", 00:08:51.503 "trtype": "$TEST_TRANSPORT", 00:08:51.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.503 "adrfam": "ipv4", 00:08:51.503 "trsvcid": "$NVMF_PORT", 00:08:51.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.503 "hdgst": ${hdgst:-false}, 00:08:51.503 "ddgst": ${ddgst:-false} 00:08:51.503 }, 00:08:51.503 "method": "bdev_nvme_attach_controller" 00:08:51.503 } 00:08:51.503 EOF 00:08:51.503 )") 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61406 00:08:51.503 00:21:07 -- nvmf/common.sh@520 -- # config=() 00:08:51.503 00:21:07 -- nvmf/common.sh@520 -- # local subsystem config 00:08:51.503 00:21:07 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:51.504 00:21:07 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61409 00:08:51.504 00:21:07 -- target/bdev_io_wait.sh@35 -- # sync 00:08:51.504 00:21:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # cat 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:51.504 { 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme$subsystem", 00:08:51.504 "trtype": "$TEST_TRANSPORT", 00:08:51.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "$NVMF_PORT", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.504 "hdgst": ${hdgst:-false}, 00:08:51.504 "ddgst": ${ddgst:-false} 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 } 00:08:51.504 EOF 00:08:51.504 )") 00:08:51.504 00:21:07 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:51.504 00:21:07 -- nvmf/common.sh@520 -- # config=() 00:08:51.504 00:21:07 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:51.504 00:21:07 -- nvmf/common.sh@520 -- # local subsystem config 00:08:51.504 00:21:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # cat 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:51.504 { 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme$subsystem", 00:08:51.504 "trtype": "$TEST_TRANSPORT", 00:08:51.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "$NVMF_PORT", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.504 "hdgst": ${hdgst:-false}, 00:08:51.504 "ddgst": ${ddgst:-false} 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 } 00:08:51.504 EOF 00:08:51.504 )") 00:08:51.504 00:21:07 -- nvmf/common.sh@544 -- # jq . 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # cat 00:08:51.504 00:21:07 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:51.504 00:21:07 -- nvmf/common.sh@520 -- # config=() 00:08:51.504 00:21:07 -- nvmf/common.sh@520 -- # local subsystem config 00:08:51.504 00:21:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:51.504 00:21:07 -- nvmf/common.sh@545 -- # IFS=, 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:51.504 { 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme$subsystem", 00:08:51.504 "trtype": "$TEST_TRANSPORT", 00:08:51.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "$NVMF_PORT", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.504 "hdgst": ${hdgst:-false}, 00:08:51.504 "ddgst": ${ddgst:-false} 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 } 00:08:51.504 EOF 00:08:51.504 )") 00:08:51.504 00:21:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme1", 00:08:51.504 "trtype": "tcp", 00:08:51.504 "traddr": "10.0.0.2", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "4420", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.504 "hdgst": false, 00:08:51.504 "ddgst": false 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 }' 00:08:51.504 00:21:07 -- nvmf/common.sh@544 -- # jq . 00:08:51.504 00:21:07 -- nvmf/common.sh@542 -- # cat 00:08:51.504 00:21:07 -- nvmf/common.sh@544 -- # jq . 00:08:51.504 00:21:07 -- nvmf/common.sh@545 -- # IFS=, 00:08:51.504 00:21:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme1", 00:08:51.504 "trtype": "tcp", 00:08:51.504 "traddr": "10.0.0.2", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "4420", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.504 "hdgst": false, 00:08:51.504 "ddgst": false 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 }' 00:08:51.504 00:21:07 -- nvmf/common.sh@545 -- # IFS=, 00:08:51.504 00:21:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:51.504 "params": { 00:08:51.504 "name": "Nvme1", 00:08:51.504 "trtype": "tcp", 00:08:51.504 "traddr": "10.0.0.2", 00:08:51.504 "adrfam": "ipv4", 00:08:51.504 "trsvcid": "4420", 00:08:51.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.504 "hdgst": false, 00:08:51.504 "ddgst": false 00:08:51.504 }, 00:08:51.504 "method": "bdev_nvme_attach_controller" 00:08:51.504 }' 00:08:51.763 00:21:07 -- nvmf/common.sh@544 -- # jq . 
00:08:51.763 00:21:07 -- nvmf/common.sh@545 -- # IFS=, 00:08:51.763 00:21:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:51.763 "params": { 00:08:51.763 "name": "Nvme1", 00:08:51.763 "trtype": "tcp", 00:08:51.763 "traddr": "10.0.0.2", 00:08:51.763 "adrfam": "ipv4", 00:08:51.763 "trsvcid": "4420", 00:08:51.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.763 "hdgst": false, 00:08:51.763 "ddgst": false 00:08:51.763 }, 00:08:51.763 "method": "bdev_nvme_attach_controller" 00:08:51.763 }' 00:08:51.763 [2024-09-29 00:21:07.379415] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.763 [2024-09-29 00:21:07.379634] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:51.763 [2024-09-29 00:21:07.386409] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.763 [2024-09-29 00:21:07.386486] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:51.763 00:21:07 -- target/bdev_io_wait.sh@37 -- # wait 61402 00:08:51.763 [2024-09-29 00:21:07.398089] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.763 [2024-09-29 00:21:07.398309] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.763 [2024-09-29 00:21:07.409204] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:51.763 [2024-09-29 00:21:07.409563] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.763 [2024-09-29 00:21:07.556946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.763 [2024-09-29 00:21:07.598860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.021 [2024-09-29 00:21:07.612781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.022 [2024-09-29 00:21:07.644315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.022 [2024-09-29 00:21:07.653239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.022 [2024-09-29 00:21:07.686501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.022 [2024-09-29 00:21:07.696961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.022 Running I/O for 1 seconds... 00:08:52.022 [2024-09-29 00:21:07.738831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.022 Running I/O for 1 seconds... 00:08:52.022 Running I/O for 1 seconds... 00:08:52.280 Running I/O for 1 seconds... 
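Each of the four bdevperf workers launched above is handed the same generated attach-controller config (Nvme1 over TCP to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) through process substitution, which is why the trace shows --json /dev/fd/63. Condensed, the four launches amount to (gen_nvmf_target_json comes from test/nvmf/common.sh; the script records each PID and waits on them individually):

    BP=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    $BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    $BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    $BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait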
00:08:53.215 00:08:53.215 Latency(us) 00:08:53.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.215 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:53.215 Nvme1n1 : 1.03 6093.99 23.80 0.00 0.00 20681.54 9949.56 35031.97 00:08:53.215 =================================================================================================================== 00:08:53.215 Total : 6093.99 23.80 0.00 0.00 20681.54 9949.56 35031.97 00:08:53.215 00:08:53.215 Latency(us) 00:08:53.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.215 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:53.215 Nvme1n1 : 1.01 9060.70 35.39 0.00 0.00 14059.70 8460.10 26691.03 00:08:53.215 =================================================================================================================== 00:08:53.215 Total : 9060.70 35.39 0.00 0.00 14059.70 8460.10 26691.03 00:08:53.215 00:08:53.215 Latency(us) 00:08:53.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.215 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:53.215 Nvme1n1 : 1.00 171724.61 670.80 0.00 0.00 742.63 376.09 1161.77 00:08:53.215 =================================================================================================================== 00:08:53.215 Total : 171724.61 670.80 0.00 0.00 742.63 376.09 1161.77 00:08:53.215 00:08:53.215 Latency(us) 00:08:53.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.215 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:53.215 Nvme1n1 : 1.01 6468.21 25.27 0.00 0.00 19731.19 5004.57 42181.35 00:08:53.215 =================================================================================================================== 00:08:53.215 Total : 6468.21 25.27 0.00 0.00 19731.19 5004.57 42181.35 00:08:53.215 00:21:08 -- target/bdev_io_wait.sh@38 -- # wait 61404 00:08:53.215 00:21:08 -- target/bdev_io_wait.sh@39 -- # wait 61406 00:08:53.215 00:21:08 -- target/bdev_io_wait.sh@40 -- # wait 61409 00:08:53.473 00:21:09 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.473 00:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:53.473 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.473 00:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:53.473 00:21:09 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:53.473 00:21:09 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:53.473 00:21:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:53.473 00:21:09 -- nvmf/common.sh@116 -- # sync 00:08:53.473 00:21:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:53.473 00:21:09 -- nvmf/common.sh@119 -- # set +e 00:08:53.473 00:21:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:53.473 00:21:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:53.473 rmmod nvme_tcp 00:08:53.473 rmmod nvme_fabrics 00:08:53.473 rmmod nvme_keyring 00:08:53.473 00:21:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:53.473 00:21:09 -- nvmf/common.sh@123 -- # set -e 00:08:53.473 00:21:09 -- nvmf/common.sh@124 -- # return 0 00:08:53.473 00:21:09 -- nvmf/common.sh@477 -- # '[' -n 61380 ']' 00:08:53.473 00:21:09 -- nvmf/common.sh@478 -- # killprocess 61380 00:08:53.473 00:21:09 -- common/autotest_common.sh@926 -- # '[' -z 61380 ']' 00:08:53.473 00:21:09 -- common/autotest_common.sh@930 -- 
# kill -0 61380 00:08:53.473 00:21:09 -- common/autotest_common.sh@931 -- # uname 00:08:53.473 00:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:53.473 00:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61380 00:08:53.473 killing process with pid 61380 00:08:53.473 00:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:53.473 00:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:53.473 00:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61380' 00:08:53.473 00:21:09 -- common/autotest_common.sh@945 -- # kill 61380 00:08:53.473 00:21:09 -- common/autotest_common.sh@950 -- # wait 61380 00:08:53.731 00:21:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:53.731 00:21:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:53.731 00:21:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:53.731 00:21:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.731 00:21:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:53.731 00:21:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.731 00:21:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.731 00:21:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.731 00:21:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:53.731 00:08:53.731 real 0m3.005s 00:08:53.731 user 0m13.466s 00:08:53.731 sys 0m1.866s 00:08:53.731 ************************************ 00:08:53.731 END TEST nvmf_bdev_io_wait 00:08:53.731 ************************************ 00:08:53.731 00:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.731 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.731 00:21:09 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.731 00:21:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:53.731 00:21:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.731 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.731 ************************************ 00:08:53.731 START TEST nvmf_queue_depth 00:08:53.731 ************************************ 00:08:53.731 00:21:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.731 * Looking for test storage... 
00:08:53.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.731 00:21:09 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.731 00:21:09 -- nvmf/common.sh@7 -- # uname -s 00:08:53.731 00:21:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.731 00:21:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.731 00:21:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.731 00:21:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.731 00:21:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.731 00:21:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.731 00:21:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.731 00:21:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.731 00:21:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.731 00:21:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.731 00:21:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:53.732 00:21:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:08:53.732 00:21:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.732 00:21:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.732 00:21:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.732 00:21:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.732 00:21:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.732 00:21:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.732 00:21:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.732 00:21:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.732 00:21:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.732 00:21:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.732 00:21:09 -- 
paths/export.sh@5 -- # export PATH 00:08:53.732 00:21:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.732 00:21:09 -- nvmf/common.sh@46 -- # : 0 00:08:53.732 00:21:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:53.732 00:21:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:53.732 00:21:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:53.732 00:21:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.732 00:21:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.732 00:21:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:53.732 00:21:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:53.732 00:21:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:53.732 00:21:09 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:53.732 00:21:09 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:53.732 00:21:09 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.732 00:21:09 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:53.732 00:21:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:53.732 00:21:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.732 00:21:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:53.732 00:21:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:53.732 00:21:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:53.732 00:21:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.732 00:21:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.732 00:21:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.990 00:21:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:53.990 00:21:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:53.990 00:21:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:53.990 00:21:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:53.990 00:21:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:53.990 00:21:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:53.990 00:21:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.990 00:21:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.991 00:21:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.991 00:21:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:53.991 00:21:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.991 00:21:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.991 00:21:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.991 00:21:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.991 00:21:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.991 00:21:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.991 00:21:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.991 00:21:09 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.991 00:21:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:53.991 00:21:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:53.991 Cannot find device "nvmf_tgt_br" 00:08:53.991 00:21:09 -- nvmf/common.sh@154 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.991 Cannot find device "nvmf_tgt_br2" 00:08:53.991 00:21:09 -- nvmf/common.sh@155 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:53.991 00:21:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:53.991 Cannot find device "nvmf_tgt_br" 00:08:53.991 00:21:09 -- nvmf/common.sh@157 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:53.991 Cannot find device "nvmf_tgt_br2" 00:08:53.991 00:21:09 -- nvmf/common.sh@158 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:53.991 00:21:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:53.991 00:21:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.991 00:21:09 -- nvmf/common.sh@161 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.991 00:21:09 -- nvmf/common.sh@162 -- # true 00:08:53.991 00:21:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.991 00:21:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.991 00:21:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.991 00:21:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.991 00:21:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.991 00:21:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.991 00:21:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.991 00:21:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.991 00:21:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.991 00:21:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:53.991 00:21:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:53.991 00:21:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:53.991 00:21:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:53.991 00:21:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.991 00:21:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.991 00:21:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.991 00:21:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:54.249 00:21:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:54.249 00:21:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.249 00:21:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.249 00:21:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.249 
00:21:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.249 00:21:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.249 00:21:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:54.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:08:54.250 00:08:54.250 --- 10.0.0.2 ping statistics --- 00:08:54.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.250 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:54.250 00:21:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:54.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:08:54.250 00:08:54.250 --- 10.0.0.3 ping statistics --- 00:08:54.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.250 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:54.250 00:21:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:54.250 00:08:54.250 --- 10.0.0.1 ping statistics --- 00:08:54.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.250 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:54.250 00:21:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.250 00:21:09 -- nvmf/common.sh@421 -- # return 0 00:08:54.250 00:21:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:54.250 00:21:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.250 00:21:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:54.250 00:21:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:54.250 00:21:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.250 00:21:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:54.250 00:21:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:54.250 00:21:09 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.250 00:21:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:54.250 00:21:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:54.250 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:54.250 00:21:09 -- nvmf/common.sh@469 -- # nvmfpid=61616 00:08:54.250 00:21:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.250 00:21:09 -- nvmf/common.sh@470 -- # waitforlisten 61616 00:08:54.250 00:21:09 -- common/autotest_common.sh@819 -- # '[' -z 61616 ']' 00:08:54.250 00:21:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.250 00:21:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.250 00:21:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.250 00:21:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.250 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:54.250 [2024-09-29 00:21:09.982562] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:08:54.250 [2024-09-29 00:21:09.982646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.508 [2024-09-29 00:21:10.118095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.508 [2024-09-29 00:21:10.172008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.508 [2024-09-29 00:21:10.172165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.508 [2024-09-29 00:21:10.172181] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.508 [2024-09-29 00:21:10.172189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.508 [2024-09-29 00:21:10.172214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.442 00:21:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.442 00:21:10 -- common/autotest_common.sh@852 -- # return 0 00:08:55.442 00:21:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:55.442 00:21:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:55.442 00:21:10 -- common/autotest_common.sh@10 -- # set +x 00:08:55.442 00:21:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.442 00:21:10 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.442 00:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.442 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.442 [2024-09-29 00:21:11.006231] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.442 00:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.442 00:21:11 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.442 00:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.442 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.442 Malloc0 00:08:55.442 00:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.442 00:21:11 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.442 00:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.442 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.442 00:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.443 00:21:11 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.443 00:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.443 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.443 00:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.443 00:21:11 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.443 00:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.443 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.443 [2024-09-29 00:21:11.060818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
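Before bdevperf attaches, the queue_depth test has already stood the target up with the rpc_cmd calls traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and one subsystem listening on 10.0.0.2:4420. Condensed:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420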
00:08:55.443 00:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.443 00:21:11 -- target/queue_depth.sh@30 -- # bdevperf_pid=61648 00:08:55.443 00:21:11 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:55.443 00:21:11 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.443 00:21:11 -- target/queue_depth.sh@33 -- # waitforlisten 61648 /var/tmp/bdevperf.sock 00:08:55.443 00:21:11 -- common/autotest_common.sh@819 -- # '[' -z 61648 ']' 00:08:55.443 00:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.443 00:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.443 00:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.443 00:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.443 00:21:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.443 [2024-09-29 00:21:11.118484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:08:55.443 [2024-09-29 00:21:11.119054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61648 ] 00:08:55.443 [2024-09-29 00:21:11.256212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.701 [2024-09-29 00:21:11.311100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.267 00:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.267 00:21:12 -- common/autotest_common.sh@852 -- # return 0 00:08:56.267 00:21:12 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:56.267 00:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.267 00:21:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.525 NVMe0n1 00:08:56.525 00:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.525 00:21:12 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:56.525 Running I/O for 10 seconds... 
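The 10-second verify run at queue depth 1024 whose results follow is driven by three steps: start bdevperf idle (-z) on its own RPC socket, attach an NVMe-oF controller to the TCP listener, then kick off the run with the perform_tests helper. Roughly (the backgrounding of bdevperf is implied by the recorded bdevperf_pid rather than shown verbatim in the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests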
00:09:06.496 00:09:06.496 Latency(us) 00:09:06.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.496 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:06.496 Verification LBA range: start 0x0 length 0x4000 00:09:06.496 NVMe0n1 : 10.06 15484.33 60.49 0.00 0.00 65898.56 13166.78 61008.06 00:09:06.496 =================================================================================================================== 00:09:06.496 Total : 15484.33 60.49 0.00 0.00 65898.56 13166.78 61008.06 00:09:06.755 0 00:09:06.755 00:21:22 -- target/queue_depth.sh@39 -- # killprocess 61648 00:09:06.755 00:21:22 -- common/autotest_common.sh@926 -- # '[' -z 61648 ']' 00:09:06.755 00:21:22 -- common/autotest_common.sh@930 -- # kill -0 61648 00:09:06.755 00:21:22 -- common/autotest_common.sh@931 -- # uname 00:09:06.755 00:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:06.755 00:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61648 00:09:06.755 killing process with pid 61648 00:09:06.755 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.755 00:09:06.755 Latency(us) 00:09:06.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.755 =================================================================================================================== 00:09:06.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.755 00:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:06.755 00:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:06.755 00:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61648' 00:09:06.755 00:21:22 -- common/autotest_common.sh@945 -- # kill 61648 00:09:06.755 00:21:22 -- common/autotest_common.sh@950 -- # wait 61648 00:09:06.755 00:21:22 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:06.755 00:21:22 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:06.755 00:21:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:06.755 00:21:22 -- nvmf/common.sh@116 -- # sync 00:09:07.015 00:21:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:07.015 00:21:22 -- nvmf/common.sh@119 -- # set +e 00:09:07.015 00:21:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:07.015 00:21:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:07.015 rmmod nvme_tcp 00:09:07.015 rmmod nvme_fabrics 00:09:07.015 rmmod nvme_keyring 00:09:07.015 00:21:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:07.015 00:21:22 -- nvmf/common.sh@123 -- # set -e 00:09:07.015 00:21:22 -- nvmf/common.sh@124 -- # return 0 00:09:07.015 00:21:22 -- nvmf/common.sh@477 -- # '[' -n 61616 ']' 00:09:07.015 00:21:22 -- nvmf/common.sh@478 -- # killprocess 61616 00:09:07.015 00:21:22 -- common/autotest_common.sh@926 -- # '[' -z 61616 ']' 00:09:07.015 00:21:22 -- common/autotest_common.sh@930 -- # kill -0 61616 00:09:07.015 00:21:22 -- common/autotest_common.sh@931 -- # uname 00:09:07.015 00:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:07.015 00:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61616 00:09:07.015 killing process with pid 61616 00:09:07.015 00:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:07.015 00:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:07.015 00:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61616' 00:09:07.015 00:21:22 -- 
common/autotest_common.sh@945 -- # kill 61616 00:09:07.015 00:21:22 -- common/autotest_common.sh@950 -- # wait 61616 00:09:07.275 00:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:07.275 00:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:07.275 00:21:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:07.275 00:21:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.275 00:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:07.275 00:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.275 00:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.275 00:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.275 00:21:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:07.275 ************************************ 00:09:07.275 END TEST nvmf_queue_depth 00:09:07.275 ************************************ 00:09:07.275 00:09:07.275 real 0m13.447s 00:09:07.275 user 0m23.657s 00:09:07.275 sys 0m1.824s 00:09:07.275 00:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.275 00:21:22 -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 00:21:22 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:07.275 00:21:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:07.275 00:21:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.275 00:21:22 -- common/autotest_common.sh@10 -- # set +x 00:09:07.275 ************************************ 00:09:07.275 START TEST nvmf_multipath 00:09:07.275 ************************************ 00:09:07.275 00:21:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:07.275 * Looking for test storage... 
00:09:07.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:07.275 00:21:23 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:07.275 00:21:23 -- nvmf/common.sh@7 -- # uname -s 00:09:07.275 00:21:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.275 00:21:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.275 00:21:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.275 00:21:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.275 00:21:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.275 00:21:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.275 00:21:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.275 00:21:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.275 00:21:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.275 00:21:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:07.275 00:21:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:07.275 00:21:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.275 00:21:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.275 00:21:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:07.275 00:21:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.275 00:21:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.275 00:21:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.275 00:21:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.275 00:21:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.275 00:21:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.275 00:21:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.275 00:21:23 -- 
paths/export.sh@5 -- # export PATH 00:09:07.275 00:21:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.275 00:21:23 -- nvmf/common.sh@46 -- # : 0 00:09:07.275 00:21:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:07.275 00:21:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:07.275 00:21:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:07.275 00:21:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.275 00:21:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.275 00:21:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:07.275 00:21:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:07.275 00:21:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:07.275 00:21:23 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.275 00:21:23 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.275 00:21:23 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:07.275 00:21:23 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.275 00:21:23 -- target/multipath.sh@43 -- # nvmftestinit 00:09:07.275 00:21:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:07.275 00:21:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.275 00:21:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:07.275 00:21:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:07.275 00:21:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:07.275 00:21:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.275 00:21:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.275 00:21:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.275 00:21:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:07.275 00:21:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:07.275 00:21:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.275 00:21:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.275 00:21:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:07.275 00:21:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:07.275 00:21:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:07.275 00:21:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:07.275 00:21:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:07.275 00:21:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.275 00:21:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:07.275 00:21:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:07.275 00:21:23 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:07.275 00:21:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:07.275 00:21:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:07.275 00:21:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:07.275 Cannot find device "nvmf_tgt_br" 00:09:07.275 00:21:23 -- nvmf/common.sh@154 -- # true 00:09:07.275 00:21:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:07.535 Cannot find device "nvmf_tgt_br2" 00:09:07.535 00:21:23 -- nvmf/common.sh@155 -- # true 00:09:07.535 00:21:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:07.535 00:21:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:07.535 Cannot find device "nvmf_tgt_br" 00:09:07.535 00:21:23 -- nvmf/common.sh@157 -- # true 00:09:07.535 00:21:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:07.535 Cannot find device "nvmf_tgt_br2" 00:09:07.535 00:21:23 -- nvmf/common.sh@158 -- # true 00:09:07.535 00:21:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:07.535 00:21:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:07.535 00:21:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:07.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.535 00:21:23 -- nvmf/common.sh@161 -- # true 00:09:07.535 00:21:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:07.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:07.535 00:21:23 -- nvmf/common.sh@162 -- # true 00:09:07.535 00:21:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:07.535 00:21:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:07.535 00:21:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:07.535 00:21:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:07.535 00:21:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:07.535 00:21:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.535 00:21:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.535 00:21:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:07.535 00:21:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:07.535 00:21:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:07.535 00:21:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:07.535 00:21:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:07.535 00:21:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:07.535 00:21:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.535 00:21:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.535 00:21:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.535 00:21:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:07.535 00:21:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:07.535 00:21:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:07.535 00:21:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:07.794 00:21:23 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:07.794 00:21:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:07.794 00:21:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:07.794 00:21:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:07.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:07.794 00:09:07.794 --- 10.0.0.2 ping statistics --- 00:09:07.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.794 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:07.794 00:21:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:07.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:07.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:07.794 00:09:07.794 --- 10.0.0.3 ping statistics --- 00:09:07.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.794 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:07.794 00:21:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:07.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:09:07.794 00:09:07.794 --- 10.0.0.1 ping statistics --- 00:09:07.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.794 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:09:07.794 00:21:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.794 00:21:23 -- nvmf/common.sh@421 -- # return 0 00:09:07.794 00:21:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:07.794 00:21:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.794 00:21:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:07.794 00:21:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:07.794 00:21:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.794 00:21:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:07.794 00:21:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:07.794 00:21:23 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:07.794 00:21:23 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:07.794 00:21:23 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:07.794 00:21:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:07.794 00:21:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:07.794 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:09:07.794 00:21:23 -- nvmf/common.sh@469 -- # nvmfpid=61964 00:09:07.794 00:21:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.794 00:21:23 -- nvmf/common.sh@470 -- # waitforlisten 61964 00:09:07.794 00:21:23 -- common/autotest_common.sh@819 -- # '[' -z 61964 ']' 00:09:07.794 00:21:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.794 00:21:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:07.794 00:21:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
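The three successful pings just traced (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check on the veth topology that nvmf_veth_init built before the target was launched; the earlier "Cannot find device" / "Cannot open network namespace" lines are only the cleanup pass finding nothing to remove. Condensed from the trace, and omitting the individual "ip link set ... up" steps, the layout is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # two target-side veths...
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # ...moved into the target namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge                                # everything bridged together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT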
00:09:07.794 00:21:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:07.794 00:21:23 -- common/autotest_common.sh@10 -- # set +x 00:09:07.794 [2024-09-29 00:21:23.493072] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:07.794 [2024-09-29 00:21:23.493463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.794 [2024-09-29 00:21:23.633808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.053 [2024-09-29 00:21:23.687307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.053 [2024-09-29 00:21:23.687486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.053 [2024-09-29 00:21:23.687501] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.053 [2024-09-29 00:21:23.687509] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.053 [2024-09-29 00:21:23.687607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.053 [2024-09-29 00:21:23.688469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.053 [2024-09-29 00:21:23.688638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.053 [2024-09-29 00:21:23.688644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.988 00:21:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:08.988 00:21:24 -- common/autotest_common.sh@852 -- # return 0 00:09:08.988 00:21:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:08.988 00:21:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:08.988 00:21:24 -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 00:21:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.988 00:21:24 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.988 [2024-09-29 00:21:24.796310] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.988 00:21:24 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:09.554 Malloc0 00:09:09.555 00:21:25 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:09.555 00:21:25 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.812 00:21:25 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.070 [2024-09-29 00:21:25.823264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.070 00:21:25 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.328 [2024-09-29 00:21:26.059553] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.328 00:21:26 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:10.586 00:21:26 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:10.586 00:21:26 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.586 00:21:26 -- common/autotest_common.sh@1177 -- # local i=0 00:09:10.586 00:21:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.586 00:21:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:10.586 00:21:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:13.113 00:21:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:13.113 00:21:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:13.113 00:21:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.113 00:21:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:13.113 00:21:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.113 00:21:28 -- common/autotest_common.sh@1187 -- # return 0 00:09:13.113 00:21:28 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:13.113 00:21:28 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:13.113 00:21:28 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:13.113 00:21:28 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:13.113 00:21:28 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:13.113 00:21:28 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:13.113 00:21:28 -- target/multipath.sh@38 -- # return 0 00:09:13.113 00:21:28 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:13.113 00:21:28 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:13.113 00:21:28 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:13.113 00:21:28 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:13.113 00:21:28 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:13.113 00:21:28 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:13.113 00:21:28 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:13.113 00:21:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.113 00:21:28 -- target/multipath.sh@22 -- # local timeout=20 00:09:13.113 00:21:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.113 00:21:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.113 00:21:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.113 00:21:28 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:13.113 00:21:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.113 00:21:28 -- target/multipath.sh@22 -- # local timeout=20 00:09:13.113 00:21:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.113 00:21:28 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.113 00:21:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.113 00:21:28 -- target/multipath.sh@85 -- # echo numa 00:09:13.113 00:21:28 -- target/multipath.sh@88 -- # fio_pid=62059 00:09:13.113 00:21:28 -- target/multipath.sh@90 -- # sleep 1 00:09:13.113 00:21:28 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:13.113 [global] 00:09:13.113 thread=1 00:09:13.113 invalidate=1 00:09:13.113 rw=randrw 00:09:13.113 time_based=1 00:09:13.113 runtime=6 00:09:13.113 ioengine=libaio 00:09:13.113 direct=1 00:09:13.113 bs=4096 00:09:13.113 iodepth=128 00:09:13.113 norandommap=0 00:09:13.113 numjobs=1 00:09:13.113 00:09:13.113 verify_dump=1 00:09:13.113 verify_backlog=512 00:09:13.113 verify_state_save=0 00:09:13.113 do_verify=1 00:09:13.113 verify=crc32c-intel 00:09:13.113 [job0] 00:09:13.113 filename=/dev/nvme0n1 00:09:13.113 Could not set queue depth (nvme0n1) 00:09:13.113 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.113 fio-3.35 00:09:13.113 Starting 1 thread 00:09:13.679 00:21:29 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:13.937 00:21:29 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:14.195 00:21:29 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:14.195 00:21:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:14.195 00:21:29 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.195 00:21:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.195 00:21:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.195 00:21:29 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.195 00:21:29 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:14.195 00:21:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:14.195 00:21:29 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.195 00:21:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.195 00:21:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.195 00:21:29 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.195 00:21:29 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:14.453 00:21:30 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:14.713 00:21:30 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:14.713 00:21:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:14.713 00:21:30 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.713 00:21:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.713 00:21:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.713 00:21:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:14.713 00:21:30 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:14.713 00:21:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:14.713 00:21:30 -- target/multipath.sh@22 -- # local timeout=20 00:09:14.713 00:21:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.713 00:21:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.713 00:21:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.713 00:21:30 -- target/multipath.sh@104 -- # wait 62059 00:09:18.922 00:09:18.922 job0: (groupid=0, jobs=1): err= 0: pid=62080: Sun Sep 29 00:21:34 2024 00:09:18.922 read: IOPS=10.6k, BW=41.3MiB/s (43.4MB/s)(248MiB/6003msec) 00:09:18.922 slat (usec): min=4, max=7804, avg=54.52, stdev=225.30 00:09:18.922 clat (usec): min=1395, max=15867, avg=8111.76, stdev=1376.18 00:09:18.922 lat (usec): min=1405, max=15899, avg=8166.28, stdev=1379.73 00:09:18.922 clat percentiles (usec): 00:09:18.922 | 1.00th=[ 4555], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7242], 00:09:18.922 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8225], 00:09:18.922 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11207], 00:09:18.922 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13435], 99.95th=[13829], 00:09:18.922 | 99.99th=[14222] 00:09:18.922 bw ( KiB/s): min= 9408, max=28536, per=52.92%, avg=22405.09, stdev=6523.45, samples=11 00:09:18.922 iops : min= 2352, max= 7134, avg=5601.27, stdev=1630.86, samples=11 00:09:18.922 write: IOPS=6471, BW=25.3MiB/s (26.5MB/s)(133MiB/5262msec); 0 zone resets 00:09:18.922 slat (usec): min=15, max=1843, avg=64.87, stdev=158.07 00:09:18.922 clat (usec): min=1233, max=17614, avg=7228.53, stdev=1227.67 00:09:18.922 lat (usec): min=1256, max=17637, avg=7293.40, stdev=1232.37 00:09:18.922 clat percentiles (usec): 00:09:18.922 | 1.00th=[ 3458], 5.00th=[ 4490], 10.00th=[ 5932], 20.00th=[ 6652], 00:09:18.922 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:09:18.922 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8717], 00:09:18.922 | 99.00th=[10814], 99.50th=[11600], 99.90th=[12387], 99.95th=[13173], 00:09:18.922 | 99.99th=[14222] 00:09:18.922 bw ( KiB/s): min= 9832, max=28416, per=86.75%, avg=22457.45, stdev=6231.60, samples=11 00:09:18.922 iops : min= 2458, max= 7104, avg=5614.36, stdev=1557.90, samples=11 00:09:18.922 lat (msec) : 2=0.03%, 4=1.26%, 10=93.49%, 20=5.22% 00:09:18.922 cpu : usr=5.33%, sys=21.42%, ctx=5529, majf=0, minf=114 00:09:18.922 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:18.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.922 issued rwts: total=63537,34054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.922 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.922 00:09:18.922 Run status group 0 (all jobs): 00:09:18.922 READ: bw=41.3MiB/s (43.4MB/s), 41.3MiB/s-41.3MiB/s (43.4MB/s-43.4MB/s), io=248MiB (260MB), run=6003-6003msec 00:09:18.922 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=133MiB (139MB), run=5262-5262msec 00:09:18.922 00:09:18.922 Disk stats (read/write): 00:09:18.922 nvme0n1: ios=62626/33502, merge=0/0, 
ticks=485967/228048, in_queue=714015, util=98.53% 00:09:18.922 00:21:34 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:19.180 00:21:34 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:19.746 00:21:35 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:19.746 00:21:35 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:19.746 00:21:35 -- target/multipath.sh@22 -- # local timeout=20 00:09:19.746 00:21:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.746 00:21:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.746 00:21:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.746 00:21:35 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:19.746 00:21:35 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:19.746 00:21:35 -- target/multipath.sh@22 -- # local timeout=20 00:09:19.746 00:21:35 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.746 00:21:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.746 00:21:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.746 00:21:35 -- target/multipath.sh@113 -- # echo round-robin 00:09:19.746 00:21:35 -- target/multipath.sh@116 -- # fio_pid=62161 00:09:19.746 00:21:35 -- target/multipath.sh@118 -- # sleep 1 00:09:19.746 00:21:35 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:19.746 [global] 00:09:19.746 thread=1 00:09:19.746 invalidate=1 00:09:19.746 rw=randrw 00:09:19.746 time_based=1 00:09:19.746 runtime=6 00:09:19.746 ioengine=libaio 00:09:19.746 direct=1 00:09:19.746 bs=4096 00:09:19.746 iodepth=128 00:09:19.746 norandommap=0 00:09:19.746 numjobs=1 00:09:19.746 00:09:19.746 verify_dump=1 00:09:19.746 verify_backlog=512 00:09:19.746 verify_state_save=0 00:09:19.746 do_verify=1 00:09:19.746 verify=crc32c-intel 00:09:19.746 [job0] 00:09:19.746 filename=/dev/nvme0n1 00:09:19.746 Could not set queue depth (nvme0n1) 00:09:19.746 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:19.746 fio-3.35 00:09:19.746 Starting 1 thread 00:09:20.680 00:21:36 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:20.938 00:21:36 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:21.196 00:21:36 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:21.196 00:21:36 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:21.196 00:21:36 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.196 00:21:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.196 00:21:36 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.196 00:21:36 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.196 00:21:36 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:21.196 00:21:36 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:21.196 00:21:36 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.196 00:21:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.196 00:21:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.196 00:21:36 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.196 00:21:36 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:21.454 00:21:37 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:21.712 00:21:37 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:21.712 00:21:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:21.712 00:21:37 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.712 00:21:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.712 00:21:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.712 00:21:37 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.712 00:21:37 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:21.712 00:21:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:21.712 00:21:37 -- target/multipath.sh@22 -- # local timeout=20 00:09:21.712 00:21:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.712 00:21:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.712 00:21:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.712 00:21:37 -- target/multipath.sh@132 -- # wait 62161 00:09:25.901 00:09:25.901 job0: (groupid=0, jobs=1): err= 0: pid=62182: Sun Sep 29 00:21:41 2024 00:09:25.901 read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(278MiB/6002msec) 00:09:25.901 slat (usec): min=4, max=6099, avg=41.71, stdev=194.62 00:09:25.901 clat (usec): min=279, max=16729, avg=7322.95, stdev=2028.36 00:09:25.901 lat (usec): min=293, max=16737, avg=7364.66, stdev=2041.71 00:09:25.901 clat percentiles (usec): 00:09:25.901 | 1.00th=[ 1795], 5.00th=[ 3720], 10.00th=[ 4686], 20.00th=[ 5735], 00:09:25.901 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:09:25.901 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[11076], 00:09:25.901 | 99.00th=[12649], 99.50th=[13304], 99.90th=[15533], 99.95th=[15795], 00:09:25.901 | 99.99th=[16188] 00:09:25.901 bw ( KiB/s): min=10576, max=40912, per=54.60%, avg=25896.00, stdev=8122.51, samples=11 00:09:25.901 iops : min= 2644, max=10228, avg=6474.00, stdev=2030.63, samples=11 00:09:25.901 write: IOPS=7039, BW=27.5MiB/s (28.8MB/s)(149MiB/5417msec); 0 zone resets 00:09:25.901 slat (usec): min=12, max=2064, avg=53.30, stdev=132.37 00:09:25.901 clat (usec): min=401, max=15547, avg=6312.61, stdev=1816.25 00:09:25.901 lat (usec): min=443, max=15600, avg=6365.91, stdev=1828.04 00:09:25.901 clat percentiles (usec): 00:09:25.901 | 1.00th=[ 1696], 5.00th=[ 3064], 10.00th=[ 3589], 20.00th=[ 4424], 00:09:25.901 | 30.00th=[ 5669], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7111], 00:09:25.902 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8291], 00:09:25.902 | 99.00th=[10814], 99.50th=[11469], 99.90th=[13173], 99.95th=[13435], 00:09:25.902 | 99.99th=[15139] 00:09:25.902 bw ( KiB/s): min=11136, max=40056, per=91.89%, avg=25875.64, stdev=7881.58, samples=11 00:09:25.902 iops : min= 2784, max=10014, avg=6468.91, stdev=1970.39, samples=11 00:09:25.902 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.13% 00:09:25.902 lat (msec) : 2=1.11%, 4=7.99%, 10=85.29%, 20=5.41% 00:09:25.902 cpu : usr=6.08%, sys=23.05%, ctx=6375, majf=0, minf=114 00:09:25.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:25.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.902 issued rwts: total=71165,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.902 00:09:25.902 Run status group 0 (all jobs): 00:09:25.902 READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=278MiB (291MB), run=6002-6002msec 00:09:25.902 WRITE: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=149MiB (156MB), run=5417-5417msec 00:09:25.902 00:09:25.902 Disk stats (read/write): 00:09:25.902 nvme0n1: ios=69849/37919, merge=0/0, ticks=487971/224033, in_queue=712004, util=98.68% 00:09:25.902 00:21:41 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:25.902 00:21:41 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.902 00:21:41 -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.902 00:21:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:25.902 00:21:41 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.902 00:21:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.902 00:21:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:25.902 00:21:41 -- common/autotest_common.sh@1210 -- # return 0 00:09:25.902 00:21:41 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.162 00:21:41 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:26.162 00:21:41 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:26.162 00:21:41 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:26.162 00:21:41 -- target/multipath.sh@144 -- # nvmftestfini 00:09:26.162 00:21:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:26.162 00:21:41 -- nvmf/common.sh@116 -- # sync 00:09:26.421 00:21:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:26.421 00:21:42 -- nvmf/common.sh@119 -- # set +e 00:09:26.421 00:21:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:26.421 00:21:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:26.421 rmmod nvme_tcp 00:09:26.421 rmmod nvme_fabrics 00:09:26.421 rmmod nvme_keyring 00:09:26.421 00:21:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:26.421 00:21:42 -- nvmf/common.sh@123 -- # set -e 00:09:26.421 00:21:42 -- nvmf/common.sh@124 -- # return 0 00:09:26.421 00:21:42 -- nvmf/common.sh@477 -- # '[' -n 61964 ']' 00:09:26.421 00:21:42 -- nvmf/common.sh@478 -- # killprocess 61964 00:09:26.421 00:21:42 -- common/autotest_common.sh@926 -- # '[' -z 61964 ']' 00:09:26.421 00:21:42 -- common/autotest_common.sh@930 -- # kill -0 61964 00:09:26.421 00:21:42 -- common/autotest_common.sh@931 -- # uname 00:09:26.421 00:21:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:26.421 00:21:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61964 00:09:26.421 killing process with pid 61964 00:09:26.421 00:21:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:26.421 00:21:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:26.421 00:21:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61964' 00:09:26.421 00:21:42 -- common/autotest_common.sh@945 -- # kill 61964 00:09:26.421 00:21:42 -- common/autotest_common.sh@950 -- # wait 61964 00:09:26.681 00:21:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:26.681 00:21:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:26.681 00:21:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:26.681 00:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.681 00:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.681 00:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.681 00:21:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:26.681 00:09:26.681 real 0m19.368s 00:09:26.681 user 1m12.959s 00:09:26.681 sys 0m9.649s 00:09:26.681 00:21:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.681 ************************************ 00:09:26.681 00:21:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 END TEST nvmf_multipath 00:09:26.681 ************************************ 00:09:26.681 00:21:42 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.681 00:21:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:26.681 00:21:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:26.681 00:21:42 -- common/autotest_common.sh@10 -- # set +x 00:09:26.681 ************************************ 00:09:26.681 START TEST nvmf_zcopy 00:09:26.681 ************************************ 00:09:26.681 00:21:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.681 * Looking for test storage... 00:09:26.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.681 00:21:42 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.681 00:21:42 -- nvmf/common.sh@7 -- # uname -s 00:09:26.681 00:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.681 00:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.681 00:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.681 00:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.681 00:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.681 00:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.681 00:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.681 00:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.681 00:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.681 00:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:26.681 00:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:26.681 00:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.681 00:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.681 00:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.681 00:21:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.681 00:21:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.681 00:21:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.681 00:21:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.681 00:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.681 00:21:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.681 
00:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.681 00:21:42 -- paths/export.sh@5 -- # export PATH 00:09:26.681 00:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.681 00:21:42 -- nvmf/common.sh@46 -- # : 0 00:09:26.681 00:21:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:26.681 00:21:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:26.681 00:21:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:26.681 00:21:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.681 00:21:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.681 00:21:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:26.681 00:21:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:26.681 00:21:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:26.681 00:21:42 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:26.681 00:21:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:26.681 00:21:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.681 00:21:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:26.681 00:21:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:26.681 00:21:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:26.681 00:21:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.681 00:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.681 00:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.681 00:21:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:26.681 00:21:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:26.681 00:21:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.681 00:21:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.681 00:21:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:26.681 00:21:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:26.681 00:21:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.681 00:21:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.681 00:21:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.682 00:21:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.682 00:21:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.682 00:21:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.682 00:21:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.682 00:21:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.682 00:21:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:26.682 00:21:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:26.682 Cannot find device "nvmf_tgt_br" 00:09:26.682 00:21:42 -- nvmf/common.sh@154 -- # true 00:09:26.682 00:21:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.941 Cannot find device "nvmf_tgt_br2" 00:09:26.941 00:21:42 -- nvmf/common.sh@155 -- # true 00:09:26.941 00:21:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:26.941 00:21:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:26.941 Cannot find device "nvmf_tgt_br" 00:09:26.941 00:21:42 -- nvmf/common.sh@157 -- # true 00:09:26.941 00:21:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:26.941 Cannot find device "nvmf_tgt_br2" 00:09:26.941 00:21:42 -- nvmf/common.sh@158 -- # true 00:09:26.941 00:21:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:26.941 00:21:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:26.941 00:21:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.941 00:21:42 -- nvmf/common.sh@161 -- # true 00:09:26.941 00:21:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.941 00:21:42 -- nvmf/common.sh@162 -- # true 00:09:26.941 00:21:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.941 00:21:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.941 00:21:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:26.941 00:21:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:26.941 00:21:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:26.941 00:21:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:26.941 00:21:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:26.941 00:21:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:26.941 00:21:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:26.941 00:21:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:26.941 00:21:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:26.941 00:21:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:26.941 00:21:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:26.941 00:21:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:26.941 00:21:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:26.941 00:21:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:26.941 00:21:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:26.941 
00:21:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:26.941 00:21:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:26.941 00:21:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:26.941 00:21:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:26.941 00:21:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:26.941 00:21:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:26.941 00:21:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:26.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:26.941 00:09:26.941 --- 10.0.0.2 ping statistics --- 00:09:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.941 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:26.941 00:21:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:26.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:26.941 00:09:26.941 --- 10.0.0.3 ping statistics --- 00:09:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.941 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:26.941 00:21:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:26.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:09:26.941 00:09:26.941 --- 10.0.0.1 ping statistics --- 00:09:26.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.941 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:26.941 00:21:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.941 00:21:42 -- nvmf/common.sh@421 -- # return 0 00:09:26.941 00:21:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:26.941 00:21:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.941 00:21:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:26.941 00:21:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:26.941 00:21:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.941 00:21:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:26.941 00:21:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:27.201 00:21:42 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:27.201 00:21:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:27.201 00:21:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:27.201 00:21:42 -- common/autotest_common.sh@10 -- # set +x 00:09:27.201 00:21:42 -- nvmf/common.sh@469 -- # nvmfpid=62430 00:09:27.201 00:21:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:27.201 00:21:42 -- nvmf/common.sh@470 -- # waitforlisten 62430 00:09:27.201 00:21:42 -- common/autotest_common.sh@819 -- # '[' -z 62430 ']' 00:09:27.201 00:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.201 00:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.201 00:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
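The zcopy test has just repeated the same per-test network bring-up; nvmfappstart now launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that helper, using the command and socket path from this run (the real waitforlisten also caps retries, max_retries=100 in the trace, and verifies the pid is still alive; polling rpc_get_methods here is a simplification):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the RPC socket until the target responds (simplified waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done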
00:09:27.201 00:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.201 00:21:42 -- common/autotest_common.sh@10 -- # set +x 00:09:27.201 [2024-09-29 00:21:42.845448] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:27.201 [2024-09-29 00:21:42.845528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.201 [2024-09-29 00:21:42.980825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.201 [2024-09-29 00:21:43.047636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:27.201 [2024-09-29 00:21:43.047813] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.201 [2024-09-29 00:21:43.047829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.201 [2024-09-29 00:21:43.047839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.201 [2024-09-29 00:21:43.047877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.140 00:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.140 00:21:43 -- common/autotest_common.sh@852 -- # return 0 00:09:28.140 00:21:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:28.140 00:21:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 00:21:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.140 00:21:43 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:28.140 00:21:43 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 [2024-09-29 00:21:43.772439] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 [2024-09-29 00:21:43.788573] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
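Since rpc_cmd in these traces is effectively scripts/rpc.py talking to the default /var/tmp/spdk.sock, the target-side setup for the zcopy test boils down to the following calls, with arguments exactly as traced above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy              # --zcopy enables zero-copy on the TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # any host, up to 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                     # 32 MiB malloc bdev, 4 KiB blocks; added as namespace 1 next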
00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 malloc0 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:28.140 00:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.140 00:21:43 -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 00:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.140 00:21:43 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:28.140 00:21:43 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:28.140 00:21:43 -- nvmf/common.sh@520 -- # config=() 00:09:28.140 00:21:43 -- nvmf/common.sh@520 -- # local subsystem config 00:09:28.140 00:21:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:28.140 00:21:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:28.140 { 00:09:28.140 "params": { 00:09:28.140 "name": "Nvme$subsystem", 00:09:28.140 "trtype": "$TEST_TRANSPORT", 00:09:28.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.140 "adrfam": "ipv4", 00:09:28.140 "trsvcid": "$NVMF_PORT", 00:09:28.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.140 "hdgst": ${hdgst:-false}, 00:09:28.140 "ddgst": ${ddgst:-false} 00:09:28.140 }, 00:09:28.140 "method": "bdev_nvme_attach_controller" 00:09:28.140 } 00:09:28.140 EOF 00:09:28.140 )") 00:09:28.140 00:21:43 -- nvmf/common.sh@542 -- # cat 00:09:28.140 00:21:43 -- nvmf/common.sh@544 -- # jq . 00:09:28.140 00:21:43 -- nvmf/common.sh@545 -- # IFS=, 00:09:28.140 00:21:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:28.140 "params": { 00:09:28.140 "name": "Nvme1", 00:09:28.140 "trtype": "tcp", 00:09:28.140 "traddr": "10.0.0.2", 00:09:28.140 "adrfam": "ipv4", 00:09:28.140 "trsvcid": "4420", 00:09:28.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:28.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:28.140 "hdgst": false, 00:09:28.140 "ddgst": false 00:09:28.140 }, 00:09:28.140 "method": "bdev_nvme_attach_controller" 00:09:28.140 }' 00:09:28.140 [2024-09-29 00:21:43.876433] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:28.140 [2024-09-29 00:21:43.876546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62463 ] 00:09:28.414 [2024-09-29 00:21:44.016397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.414 [2024-09-29 00:21:44.086425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.414 Running I/O for 10 seconds... 
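Once the target is up and listening on /var/tmp/spdk.sock, the test provisions it over JSON-RPC: a TCP transport created with the zero-copy option, subsystem nqn.2016-06.io.spdk:cnode1 with data and discovery listeners on 10.0.0.2:4420, and a malloc bdev attached as namespace 1. The log drives these through the rpc_cmd helper from target/zcopy.sh; the sketch below replays the same calls as direct scripts/rpc.py invocations, which is an assumed-equivalent way to issue them (the rpc.py path is inferred from the repo location shown elsewhere in the log), with every argument copied from the commands above.

# Sketch only: the RPC sequence from target/zcopy.sh, issued directly against
# the target's RPC socket instead of via the rpc_cmd test helper.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport; flags exactly as in the logged NVMF_TRANSPORT_OPTS plus --zcopy.
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10
# namespaces, with data and discovery listeners on 10.0.0.2:4420.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Back the subsystem with a malloc bdev and expose it as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1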
00:09:38.402 00:09:38.402 Latency(us) 00:09:38.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.402 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:38.402 Verification LBA range: start 0x0 length 0x1000 00:09:38.402 Nvme1n1 : 10.01 10094.09 78.86 0.00 0.00 12648.88 1511.80 19779.96 00:09:38.402 =================================================================================================================== 00:09:38.402 Total : 10094.09 78.86 0.00 0.00 12648.88 1511.80 19779.96 00:09:38.662 00:21:54 -- target/zcopy.sh@39 -- # perfpid=62580 00:09:38.662 00:21:54 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:38.662 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:09:38.662 00:21:54 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:38.662 00:21:54 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:38.662 00:21:54 -- nvmf/common.sh@520 -- # config=() 00:09:38.662 00:21:54 -- nvmf/common.sh@520 -- # local subsystem config 00:09:38.662 00:21:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:38.662 00:21:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:38.662 { 00:09:38.662 "params": { 00:09:38.662 "name": "Nvme$subsystem", 00:09:38.662 "trtype": "$TEST_TRANSPORT", 00:09:38.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.662 "adrfam": "ipv4", 00:09:38.662 "trsvcid": "$NVMF_PORT", 00:09:38.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.662 "hdgst": ${hdgst:-false}, 00:09:38.662 "ddgst": ${ddgst:-false} 00:09:38.662 }, 00:09:38.662 "method": "bdev_nvme_attach_controller" 00:09:38.662 } 00:09:38.662 EOF 00:09:38.662 )") 00:09:38.662 00:21:54 -- nvmf/common.sh@542 -- # cat 00:09:38.662 [2024-09-29 00:21:54.420549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.420653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 00:21:54 -- nvmf/common.sh@544 -- # jq . 
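The 10-second verify pass above finishes with no failures at roughly 10094 IOPS (78.86 MiB/s) and an average latency near 12.6 ms at queue depth 128 with 8 KiB I/O, after which the script starts a second bdevperf with a 5-second 50/50 random read/write workload. Both runs receive their controller-attach configuration as JSON over a file-descriptor redirect. The sketch below shows an equivalent host-side invocation; the "subsystems"/"bdev" envelope is the standard SPDK JSON config wrapper and is an assumption here (the log only echoes the bdev_nvme_attach_controller fragment, and the harness's gen_nvmf_target_json builds a fuller config around it), as is writing the file to /tmp instead of using /dev/fd.

# Sketch only: feed bdevperf the attach-controller config seen in the log.
# The params block is copied from the printed fragment; the surrounding
# "subsystems" envelope and the temp-file path are assumptions.
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 10-second verify pass (the run summarized above), then the 5-second 50/50
# randrw pass, both at queue depth 128 with 8 KiB I/O.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$BDEVPERF" --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192
"$BDEVPERF" --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192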
00:09:38.662 00:21:54 -- nvmf/common.sh@545 -- # IFS=, 00:09:38.662 00:21:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:38.662 "params": { 00:09:38.662 "name": "Nvme1", 00:09:38.662 "trtype": "tcp", 00:09:38.662 "traddr": "10.0.0.2", 00:09:38.662 "adrfam": "ipv4", 00:09:38.662 "trsvcid": "4420", 00:09:38.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.662 "hdgst": false, 00:09:38.662 "ddgst": false 00:09:38.662 }, 00:09:38.662 "method": "bdev_nvme_attach_controller" 00:09:38.662 }' 00:09:38.662 [2024-09-29 00:21:54.432499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.432543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.440503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.440531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.452507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.452565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.459210] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:38.662 [2024-09-29 00:21:54.459296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:09:38.662 [2024-09-29 00:21:54.464513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.464579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.476547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.476606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.488511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.488569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.662 [2024-09-29 00:21:54.500519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.662 [2024-09-29 00:21:54.500582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.512527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.512569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.524538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.524578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.536528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.536597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.548535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 
00:21:54.548589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.560544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.560583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.572544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.572597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.584570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.584613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.592915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.921 [2024-09-29 00:21:54.596548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.596578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.608586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.608632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.620577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.620632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.632604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.632666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.644606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.644648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.649191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.921 [2024-09-29 00:21:54.656600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.921 [2024-09-29 00:21:54.656639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.921 [2024-09-29 00:21:54.668632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.668697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.680605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.680669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.692633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.692679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.704629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.704672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.716654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.716715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.728662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.728708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.740693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.740740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.752686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.752732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.922 [2024-09-29 00:21:54.764699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.922 [2024-09-29 00:21:54.764758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.776698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.776744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 Running I/O for 5 seconds... 00:09:39.181 [2024-09-29 00:21:54.788724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.788768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.805808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.805855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.821912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.821961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.840052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.840101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.855640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.855687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.874247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.874294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.887824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.887871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.903227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.903275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.920705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.920751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.937779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.937826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.954195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.954242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.971428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.971475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:54.988479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:54.988529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:55.003496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.181 [2024-09-29 00:21:55.003543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.181 [2024-09-29 00:21:55.013005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.182 [2024-09-29 00:21:55.013052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.182 [2024-09-29 00:21:55.028319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.182 [2024-09-29 00:21:55.028399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.037837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.037883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.052781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.052827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.069425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.069472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.086670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.086717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.104117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.104165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.120180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.120251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.136755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.136801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.155060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:39.441 [2024-09-29 00:21:55.155107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.169671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.169720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.185651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.185700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.202089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.202136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.218852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.218899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.234987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.235034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.252098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.252145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.267746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.267793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.441 [2024-09-29 00:21:55.285732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.441 [2024-09-29 00:21:55.285793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.700 [2024-09-29 00:21:55.300489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.700 [2024-09-29 00:21:55.300524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.700 [2024-09-29 00:21:55.315832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.700 [2024-09-29 00:21:55.315880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.700 [2024-09-29 00:21:55.333765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.700 [2024-09-29 00:21:55.333812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.700 [2024-09-29 00:21:55.349297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.700 [2024-09-29 00:21:55.349353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.366251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.366297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.382335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.382407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 
00:21:55.399931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.399977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.415734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.415780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.433805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.433852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.448461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.448510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.464919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.464965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.480823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.480869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.498990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.499037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.513136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.513183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.522892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.522949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.701 [2024-09-29 00:21:55.538973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.701 [2024-09-29 00:21:55.539057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.553757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.553806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.562224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.562271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.578784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.578862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.595313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.595380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.612874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.612905] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.627418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.627450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.644689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.644735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.659902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.659949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.677012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.677100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.692886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.692944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.710641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.710689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.726598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.726647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.744574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.744623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.758629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.758675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.774645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.774692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.960 [2024-09-29 00:21:55.791541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.960 [2024-09-29 00:21:55.791587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.809286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.809344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.823588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.823636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.839118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.839168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.856089] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.856136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.871880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.871944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.889521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.889568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.905468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.905522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.924083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.924136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.938583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.938632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.954878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.954925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.971838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.971885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:55.989125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:55.989172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:56.005412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:56.005490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:56.023172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:56.023228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:56.038160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:56.038208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:56.049109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:56.049155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.219 [2024-09-29 00:21:56.065816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.219 [2024-09-29 00:21:56.065865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.080132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.080178] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.096822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.096878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.112871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.112919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.130205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.130251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.145878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.145926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.163454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.163500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.179234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.179283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.197020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.197069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.211483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.211530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.227304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.227388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.243912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.479 [2024-09-29 00:21:56.243961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.479 [2024-09-29 00:21:56.259890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.480 [2024-09-29 00:21:56.259937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.480 [2024-09-29 00:21:56.277694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.480 [2024-09-29 00:21:56.277743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.480 [2024-09-29 00:21:56.293195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.480 [2024-09-29 00:21:56.293242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.480 [2024-09-29 00:21:56.311270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.480 [2024-09-29 00:21:56.311362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.480 [2024-09-29 00:21:56.326797] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.480 [2024-09-29 00:21:56.326848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.343499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.343609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.361152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.361199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.376616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.376663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.387839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.387885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.404035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.404081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.422200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.422246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.437027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.437073] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.447763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.447808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.462660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.462709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.479871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.479917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.496468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.496517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.513723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.513770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.530213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.530259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.547923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.547974] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.563000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.563037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.739 [2024-09-29 00:21:56.572898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.739 [2024-09-29 00:21:56.572948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.587615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.587664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.603690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.603753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.620906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.620954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.635285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.635361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.651582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.651633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.668090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.668137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.685256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.685304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.701920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.701967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.718886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.718935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.734452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.998 [2024-09-29 00:21:56.734508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.998 [2024-09-29 00:21:56.752786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.752833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.999 [2024-09-29 00:21:56.768573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.768634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.999 [2024-09-29 00:21:56.786770] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.786840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.999 [2024-09-29 00:21:56.802553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.802600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.999 [2024-09-29 00:21:56.820416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.820451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.999 [2024-09-29 00:21:56.836254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.999 [2024-09-29 00:21:56.836304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.852167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.852239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.870043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.870091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.886410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.886459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.902315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.902388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.920022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.920068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.935550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.935599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.952917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.952968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.969144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.969191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:56.987615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:56.987664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.002195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.002242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.011189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.011237] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.026649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.026698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.043441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.043487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.060964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.061010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.076320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.076382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.087532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.087580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.258 [2024-09-29 00:21:57.103917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.258 [2024-09-29 00:21:57.103967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.119531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.119578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.138310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.138386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.152306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.152368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.167579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.167626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.178768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.178815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.195653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.195703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.211284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.211359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.229620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.229668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.243620] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.243667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.258331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.258403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.269744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.269791] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.285642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.285690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.303014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.303062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.319438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.319486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.336573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.336633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.517 [2024-09-29 00:21:57.353107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.517 [2024-09-29 00:21:57.353154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.370594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.370645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.386052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.386101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.404083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.404132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.419876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.419924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.437217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.437267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.453395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.453454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.470783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.470830] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.485781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.485828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.501131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.501179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.518612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.518660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.535290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.535363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.552090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.552137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.568567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.568617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.583797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.583847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.593401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.593460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.608324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.608383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.776 [2024-09-29 00:21:57.619479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.776 [2024-09-29 00:21:57.619525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.635600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.635633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.652569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.652615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.666826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.666873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.682304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.682379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.692122] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.692169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.706801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.706849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.721711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.721758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.738266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.738312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.754106] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.754154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.771893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.771940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.787843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.787890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.805429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.805475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.822097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.043 [2024-09-29 00:21:57.822145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.043 [2024-09-29 00:21:57.839084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.044 [2024-09-29 00:21:57.839131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.044 [2024-09-29 00:21:57.854499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.044 [2024-09-29 00:21:57.854546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.044 [2024-09-29 00:21:57.872476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.044 [2024-09-29 00:21:57.872526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.887707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.887770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.905102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.905160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.921234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.921281] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.938599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.938655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.953613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.953673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.970821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.970868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:57.985753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:57.985800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:58.002232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:58.002279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:58.017940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.322 [2024-09-29 00:21:58.017988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.322 [2024-09-29 00:21:58.036545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.036644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.051149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.051213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.061613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.061665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.076762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.076812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.094018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.094070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.110626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.110673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.126152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.126214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.143876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.143922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.323 [2024-09-29 00:21:58.159310] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.323 [2024-09-29 00:21:58.159384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.177498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.177544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.192811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.192857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.203603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.203651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.219860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.219909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.236711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.236758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.254595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.254643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.270934] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.270997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.286732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.286780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.304295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.304368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.319720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.319767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.337421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.337468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.352631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.352678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.363842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.363889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.379492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.379549] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.396596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.396657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.582 [2024-09-29 00:21:58.413329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.582 [2024-09-29 00:21:58.413404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.430936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.431002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.445969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.446015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.462294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.462389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.478600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.478647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.495039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.495086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.512960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.513025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.527454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.527516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.841 [2024-09-29 00:21:58.543427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.841 [2024-09-29 00:21:58.543477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.561360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.561427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.576263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.576312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.587410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.587476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.602834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.602885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.621325] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.621399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.636260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.636310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.653886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.653932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.671018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.671050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.842 [2024-09-29 00:21:58.687587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.842 [2024-09-29 00:21:58.687623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.703018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.703068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.721779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.721862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.736909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.736960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.747946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.747998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.762310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.762369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.778516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.778562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.795717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.795763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.812409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.812458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.830141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.830188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.846191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.846239] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.862026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.862072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.878827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.878873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.895501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.895546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.912701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.912745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.929718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.929764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.101 [2024-09-29 00:21:58.946696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.101 [2024-09-29 00:21:58.946743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:58.962205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:58.962253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:58.973820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:58.973867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:58.989981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:58.990036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.006819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.006866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.022970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.023016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.039312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.039386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.058222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.058271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.072471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.072528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.360 [2024-09-29 00:21:59.087662] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.360 [2024-09-29 00:21:59.087708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.104861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.104907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.122302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.122374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.137020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.137069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.146604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.146689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.162563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.162617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.179863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.179909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.361 [2024-09-29 00:21:59.194666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.361 [2024-09-29 00:21:59.194714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.211742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.211815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.226442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.226494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.242685] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.242746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.259656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.259704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.276887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.276934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.293336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.293409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.310236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.310283] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.326683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.326745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.344960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.345008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.360427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.360476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.371892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.371938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.387309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.387382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.404788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.404834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.421823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.421869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.437233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.437280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.620 [2024-09-29 00:21:59.454854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.620 [2024-09-29 00:21:59.454900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-09-29 00:21:59.471358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-09-29 00:21:59.471434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.878 [2024-09-29 00:21:59.487111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.878 [2024-09-29 00:21:59.487158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.504967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.505014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.519709] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.519770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.536059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.536106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.551179] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.551227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.560358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.560422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.576821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.576869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.587986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.588032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.604158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.604214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.621090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.621127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.631448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.631494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.645057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.645104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.660179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.660250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.678594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.678644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.693234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.693264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.710310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.710386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.879 [2024-09-29 00:21:59.726350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.879 [2024-09-29 00:21:59.726422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.744622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.744668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.758591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.758638] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.773047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.773094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.789506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.789553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 00:09:44.138 Latency(us) 00:09:44.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.138 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:44.138 Nvme1n1 : 5.01 13019.00 101.71 0.00 0.00 9819.57 4051.32 20614.05 00:09:44.138 =================================================================================================================== 00:09:44.138 Total : 13019.00 101.71 0.00 0.00 9819.57 4051.32 20614.05 00:09:44.138 [2024-09-29 00:21:59.800441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.800477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.812426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.812458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.824457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.824501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.836465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.836523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.848467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.848524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.860489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.860544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.872489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.872546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.884493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.884542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.896471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.896513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.908478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.908524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.920501] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.920553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.932486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.932531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.944482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.944524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.956513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.956582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.968489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.968531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.138 [2024-09-29 00:21:59.980498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.138 [2024-09-29 00:21:59.980541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.397 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62580) - No such process 00:09:44.397 00:21:59 -- target/zcopy.sh@49 -- # wait 62580 00:09:44.397 00:21:59 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.397 00:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.397 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:09:44.397 00:21:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.397 00:21:59 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:44.397 00:21:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.397 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:09:44.397 delay0 00:09:44.397 00:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.397 00:22:00 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:44.397 00:22:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:44.397 00:22:00 -- common/autotest_common.sh@10 -- # set +x 00:09:44.397 00:22:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:44.397 00:22:00 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:44.397 [2024-09-29 00:22:00.173453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:50.962 Initializing NVMe Controllers 00:09:50.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:50.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:50.962 Initialization complete. Launching workers. 
00:09:50.962 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 120 00:09:50.962 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 407, failed to submit 33 00:09:50.962 success 289, unsuccess 118, failed 0 00:09:50.962 00:22:06 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:50.962 00:22:06 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:50.962 00:22:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:50.962 00:22:06 -- nvmf/common.sh@116 -- # sync 00:09:50.962 00:22:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:50.962 00:22:06 -- nvmf/common.sh@119 -- # set +e 00:09:50.962 00:22:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:50.962 00:22:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:50.962 rmmod nvme_tcp 00:09:50.962 rmmod nvme_fabrics 00:09:50.962 rmmod nvme_keyring 00:09:50.962 00:22:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:50.962 00:22:06 -- nvmf/common.sh@123 -- # set -e 00:09:50.962 00:22:06 -- nvmf/common.sh@124 -- # return 0 00:09:50.962 00:22:06 -- nvmf/common.sh@477 -- # '[' -n 62430 ']' 00:09:50.962 00:22:06 -- nvmf/common.sh@478 -- # killprocess 62430 00:09:50.962 00:22:06 -- common/autotest_common.sh@926 -- # '[' -z 62430 ']' 00:09:50.962 00:22:06 -- common/autotest_common.sh@930 -- # kill -0 62430 00:09:50.962 00:22:06 -- common/autotest_common.sh@931 -- # uname 00:09:50.962 00:22:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:50.962 00:22:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62430 00:09:50.962 00:22:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:50.962 00:22:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:50.962 00:22:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62430' 00:09:50.962 killing process with pid 62430 00:09:50.962 00:22:06 -- common/autotest_common.sh@945 -- # kill 62430 00:09:50.962 00:22:06 -- common/autotest_common.sh@950 -- # wait 62430 00:09:50.962 00:22:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:50.962 00:22:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:50.962 00:22:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:50.962 00:22:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.962 00:22:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:50.962 00:22:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.962 00:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.962 00:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.962 00:22:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:50.962 00:09:50.962 real 0m24.184s 00:09:50.962 user 0m39.794s 00:09:50.962 sys 0m6.563s 00:09:50.962 00:22:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.962 00:22:06 -- common/autotest_common.sh@10 -- # set +x 00:09:50.962 ************************************ 00:09:50.962 END TEST nvmf_zcopy 00:09:50.962 ************************************ 00:09:50.962 00:22:06 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.962 00:22:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:50.962 00:22:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.962 00:22:06 -- common/autotest_common.sh@10 -- # set +x 00:09:50.962 ************************************ 00:09:50.962 START TEST 
nvmf_nmic 00:09:50.962 ************************************ 00:09:50.962 00:22:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:50.962 * Looking for test storage... 00:09:50.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.962 00:22:06 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.962 00:22:06 -- nvmf/common.sh@7 -- # uname -s 00:09:50.962 00:22:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.962 00:22:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.962 00:22:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.962 00:22:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.962 00:22:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.962 00:22:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.962 00:22:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.962 00:22:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.962 00:22:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.962 00:22:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.962 00:22:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:50.962 00:22:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:50.962 00:22:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.962 00:22:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.962 00:22:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.962 00:22:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.962 00:22:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.962 00:22:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.962 00:22:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.962 00:22:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.962 00:22:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.962 00:22:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.962 00:22:06 -- paths/export.sh@5 -- # export PATH 00:09:50.962 00:22:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.962 00:22:06 -- nvmf/common.sh@46 -- # : 0 00:09:50.963 00:22:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:50.963 00:22:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:50.963 00:22:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:50.963 00:22:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.963 00:22:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.963 00:22:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:50.963 00:22:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:50.963 00:22:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:50.963 00:22:06 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:50.963 00:22:06 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.963 00:22:06 -- target/nmic.sh@14 -- # nvmftestinit 00:09:50.963 00:22:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:50.963 00:22:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.963 00:22:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:50.963 00:22:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:50.963 00:22:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:50.963 00:22:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.963 00:22:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.963 00:22:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.963 00:22:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:50.963 00:22:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:50.963 00:22:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:50.963 00:22:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:50.963 00:22:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:50.963 00:22:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:50.963 00:22:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.963 00:22:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.963 00:22:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:50.963 00:22:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:50.963 00:22:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.963 00:22:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.963 00:22:06 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.963 00:22:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.963 00:22:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.963 00:22:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.963 00:22:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.963 00:22:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.963 00:22:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:50.963 00:22:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:50.963 Cannot find device "nvmf_tgt_br" 00:09:50.963 00:22:06 -- nvmf/common.sh@154 -- # true 00:09:50.963 00:22:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.963 Cannot find device "nvmf_tgt_br2" 00:09:50.963 00:22:06 -- nvmf/common.sh@155 -- # true 00:09:50.963 00:22:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:50.963 00:22:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:50.963 Cannot find device "nvmf_tgt_br" 00:09:50.963 00:22:06 -- nvmf/common.sh@157 -- # true 00:09:50.963 00:22:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:50.963 Cannot find device "nvmf_tgt_br2" 00:09:50.963 00:22:06 -- nvmf/common.sh@158 -- # true 00:09:50.963 00:22:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:51.222 00:22:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:51.222 00:22:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:51.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.222 00:22:06 -- nvmf/common.sh@161 -- # true 00:09:51.222 00:22:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:51.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:51.222 00:22:06 -- nvmf/common.sh@162 -- # true 00:09:51.222 00:22:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:51.222 00:22:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:51.222 00:22:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:51.222 00:22:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:51.222 00:22:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.222 00:22:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.222 00:22:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.222 00:22:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:51.222 00:22:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:51.222 00:22:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:51.222 00:22:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:51.222 00:22:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:51.222 00:22:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:51.222 00:22:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:51.222 00:22:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.222 00:22:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:51.222 00:22:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:51.222 00:22:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:51.222 00:22:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.222 00:22:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.222 00:22:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.222 00:22:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.222 00:22:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.222 00:22:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:51.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:09:51.222 00:09:51.222 --- 10.0.0.2 ping statistics --- 00:09:51.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.222 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:51.222 00:22:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:51.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:09:51.222 00:09:51.222 --- 10.0.0.3 ping statistics --- 00:09:51.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.222 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:51.222 00:22:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:51.222 00:09:51.222 --- 10.0.0.1 ping statistics --- 00:09:51.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.222 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:51.222 00:22:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.222 00:22:07 -- nvmf/common.sh@421 -- # return 0 00:09:51.222 00:22:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:51.222 00:22:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.222 00:22:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:51.222 00:22:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:51.222 00:22:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.222 00:22:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:51.222 00:22:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:51.481 00:22:07 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:51.481 00:22:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:51.481 00:22:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:51.481 00:22:07 -- common/autotest_common.sh@10 -- # set +x 00:09:51.481 00:22:07 -- nvmf/common.sh@469 -- # nvmfpid=62901 00:09:51.481 00:22:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.481 00:22:07 -- nvmf/common.sh@470 -- # waitforlisten 62901 00:09:51.481 00:22:07 -- common/autotest_common.sh@819 -- # '[' -z 62901 ']' 00:09:51.481 00:22:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.481 00:22:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:51.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
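For reference, the veth/namespace topology that the nvmf_veth_init step above builds can be reproduced standalone. This is a minimal sketch, condensed from the ip/iptables calls traced in the log; the interface and namespace names are the harness's own, and the second target leg (nvmf_tgt_if2 / 10.0.0.3) and cleanup are omitted here for brevity.

  # create the target network namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target end lives inside the test netns

  # address the initiator (host side) and the target (netns side)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # bring everything up, including loopback inside the netns
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the "br" ends so initiator and target can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP traffic on the discovery/IO port and verify connectivity
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

With this in place the target started inside nvmf_tgt_ns_spdk is reachable at 10.0.0.2:4420 from the host-side initiator, which is what the ping statistics above confirm before nvmf_tgt is launched.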
00:09:51.481 00:22:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.481 00:22:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:51.481 00:22:07 -- common/autotest_common.sh@10 -- # set +x 00:09:51.481 [2024-09-29 00:22:07.136561] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:09:51.481 [2024-09-29 00:22:07.136665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.481 [2024-09-29 00:22:07.278175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.740 [2024-09-29 00:22:07.349813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:51.740 [2024-09-29 00:22:07.349989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.740 [2024-09-29 00:22:07.350005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.740 [2024-09-29 00:22:07.350016] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.740 [2024-09-29 00:22:07.350179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.740 [2024-09-29 00:22:07.351063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.740 [2024-09-29 00:22:07.351267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.740 [2024-09-29 00:22:07.351275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.308 00:22:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.308 00:22:08 -- common/autotest_common.sh@852 -- # return 0 00:09:52.308 00:22:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:52.308 00:22:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:52.308 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 00:22:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.567 00:22:08 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 [2024-09-29 00:22:08.184138] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 Malloc0 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 [2024-09-29 00:22:08.238775] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 test case1: single bdev can't be used in multiple subsystems 00:09:52.567 00:22:08 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:52.567 00:22:08 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@28 -- # nmic_status=0 00:09:52.567 00:22:08 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 [2024-09-29 00:22:08.262633] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:52.567 [2024-09-29 00:22:08.262670] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:52.567 [2024-09-29 00:22:08.262681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.567 request: 00:09:52.567 { 00:09:52.567 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:52.567 "namespace": { 00:09:52.567 "bdev_name": "Malloc0" 00:09:52.567 }, 00:09:52.567 "method": "nvmf_subsystem_add_ns", 00:09:52.567 "req_id": 1 00:09:52.567 } 00:09:52.567 Got JSON-RPC error response 00:09:52.567 response: 00:09:52.567 { 00:09:52.567 "code": -32602, 00:09:52.567 "message": "Invalid parameters" 00:09:52.567 } 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@29 -- # nmic_status=1 00:09:52.567 00:22:08 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:52.567 Adding namespace failed - expected result. 00:09:52.567 00:22:08 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
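The failure expected in test case1 comes from Malloc0 already being claimed exclusive_write by cnode1. The same RPC methods the rpc_cmd helper issues above can be driven directly with scripts/rpc.py; a sketch assembled only from the calls logged in this test, with the final add_ns expected to return the Invalid parameters error:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: Malloc0 already claimed exclusive_write by cnode1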
00:09:52.567 test case2: host connect to nvmf target in multiple paths 00:09:52.567 00:22:08 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:52.567 00:22:08 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:52.567 00:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:52.567 00:22:08 -- common/autotest_common.sh@10 -- # set +x 00:09:52.567 [2024-09-29 00:22:08.274761] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:52.567 00:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:52.567 00:22:08 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.567 00:22:08 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:52.826 00:22:08 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.826 00:22:08 -- common/autotest_common.sh@1177 -- # local i=0 00:09:52.826 00:22:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.826 00:22:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:09:52.826 00:22:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:09:54.728 00:22:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:09:54.728 00:22:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:09:54.728 00:22:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.728 00:22:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:09:54.728 00:22:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.728 00:22:10 -- common/autotest_common.sh@1187 -- # return 0 00:09:54.728 00:22:10 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:54.728 [global] 00:09:54.728 thread=1 00:09:54.728 invalidate=1 00:09:54.728 rw=write 00:09:54.728 time_based=1 00:09:54.728 runtime=1 00:09:54.728 ioengine=libaio 00:09:54.728 direct=1 00:09:54.728 bs=4096 00:09:54.728 iodepth=1 00:09:54.728 norandommap=0 00:09:54.728 numjobs=1 00:09:54.728 00:09:54.728 verify_dump=1 00:09:54.728 verify_backlog=512 00:09:54.728 verify_state_save=0 00:09:54.728 do_verify=1 00:09:54.728 verify=crc32c-intel 00:09:54.986 [job0] 00:09:54.986 filename=/dev/nvme0n1 00:09:54.986 Could not set queue depth (nvme0n1) 00:09:54.986 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:54.986 fio-3.35 00:09:54.986 Starting 1 thread 00:09:56.388 00:09:56.388 job0: (groupid=0, jobs=1): err= 0: pid=62992: Sun Sep 29 00:22:11 2024 00:09:56.388 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:56.388 slat (usec): min=11, max=134, avg=16.63, stdev= 6.88 00:09:56.388 clat (usec): min=125, max=594, avg=175.21, stdev=26.54 00:09:56.388 lat (usec): min=139, max=610, avg=191.84, stdev=28.68 00:09:56.389 clat percentiles (usec): 00:09:56.389 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:09:56.389 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:09:56.389 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 
221], 00:09:56.389 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 314], 99.95th=[ 412], 00:09:56.389 | 99.99th=[ 594] 00:09:56.389 write: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:09:56.389 slat (usec): min=13, max=113, avg=21.82, stdev= 7.19 00:09:56.389 clat (usec): min=78, max=273, avg=106.70, stdev=18.00 00:09:56.389 lat (usec): min=95, max=386, avg=128.52, stdev=20.67 00:09:56.389 clat percentiles (usec): 00:09:56.389 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 93], 00:09:56.389 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 106], 00:09:56.389 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 133], 95.00th=[ 141], 00:09:56.389 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 219], 99.95th=[ 231], 00:09:56.389 | 99.99th=[ 273] 00:09:56.389 bw ( KiB/s): min=12440, max=12440, per=100.00%, avg=12440.00, stdev= 0.00, samples=1 00:09:56.389 iops : min= 3110, max= 3110, avg=3110.00, stdev= 0.00, samples=1 00:09:56.389 lat (usec) : 100=22.02%, 250=77.56%, 500=0.40%, 750=0.02% 00:09:56.389 cpu : usr=2.90%, sys=8.80%, ctx=6182, majf=0, minf=5 00:09:56.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:56.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:56.389 issued rwts: total=3072,3109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:56.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:56.389 00:09:56.389 Run status group 0 (all jobs): 00:09:56.389 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:56.389 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:09:56.389 00:09:56.389 Disk stats (read/write): 00:09:56.389 nvme0n1: ios=2628/3072, merge=0/0, ticks=502/388, in_queue=890, util=91.28% 00:09:56.389 00:22:11 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:56.389 00:22:11 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.389 00:22:11 -- common/autotest_common.sh@1198 -- # local i=0 00:09:56.389 00:22:11 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:09:56.389 00:22:11 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.389 00:22:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.389 00:22:11 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:09:56.389 00:22:11 -- common/autotest_common.sh@1210 -- # return 0 00:09:56.389 00:22:11 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:56.389 00:22:11 -- target/nmic.sh@53 -- # nvmftestfini 00:09:56.389 00:22:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:56.389 00:22:11 -- nvmf/common.sh@116 -- # sync 00:09:56.389 00:22:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:56.389 00:22:11 -- nvmf/common.sh@119 -- # set +e 00:09:56.389 00:22:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:56.389 00:22:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:56.389 rmmod nvme_tcp 00:09:56.389 rmmod nvme_fabrics 00:09:56.389 rmmod nvme_keyring 00:09:56.389 00:22:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:56.389 00:22:12 -- nvmf/common.sh@123 -- # set -e 00:09:56.389 00:22:12 -- nvmf/common.sh@124 -- # return 0 00:09:56.389 00:22:12 -- nvmf/common.sh@477 -- # '[' -n 
62901 ']' 00:09:56.389 00:22:12 -- nvmf/common.sh@478 -- # killprocess 62901 00:09:56.389 00:22:12 -- common/autotest_common.sh@926 -- # '[' -z 62901 ']' 00:09:56.389 00:22:12 -- common/autotest_common.sh@930 -- # kill -0 62901 00:09:56.389 00:22:12 -- common/autotest_common.sh@931 -- # uname 00:09:56.389 00:22:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.389 00:22:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62901 00:09:56.389 00:22:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:56.389 00:22:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:56.389 00:22:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62901' 00:09:56.389 killing process with pid 62901 00:09:56.389 00:22:12 -- common/autotest_common.sh@945 -- # kill 62901 00:09:56.389 00:22:12 -- common/autotest_common.sh@950 -- # wait 62901 00:09:56.649 00:22:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:56.649 00:22:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:56.649 00:22:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:56.649 00:22:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.649 00:22:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.649 00:22:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.649 00:22:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:56.649 00:09:56.649 real 0m5.652s 00:09:56.649 user 0m18.310s 00:09:56.649 sys 0m2.220s 00:09:56.649 00:22:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.649 00:22:12 -- common/autotest_common.sh@10 -- # set +x 00:09:56.649 ************************************ 00:09:56.649 END TEST nvmf_nmic 00:09:56.649 ************************************ 00:09:56.649 00:22:12 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:56.649 00:22:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:56.649 00:22:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.649 00:22:12 -- common/autotest_common.sh@10 -- # set +x 00:09:56.649 ************************************ 00:09:56.649 START TEST nvmf_fio_target 00:09:56.649 ************************************ 00:09:56.649 00:22:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:56.649 * Looking for test storage... 
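Test case2 of nmic, completed just above, gave the host two TCP paths to the same subsystem (ports 4420 and 4421) before running a short fio write pass and disconnecting both controllers with a single disconnect. Condensed from the commands logged there, the host side amounts to roughly the following; the hostnqn/hostid values are the ones generated for this run:

NVME_HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02)
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# waitforserial: poll until the namespace shows up with the expected serial
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers (both paths) at once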
00:09:56.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.649 00:22:12 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.649 00:22:12 -- nvmf/common.sh@7 -- # uname -s 00:09:56.649 00:22:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.649 00:22:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.649 00:22:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.649 00:22:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.649 00:22:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.649 00:22:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.649 00:22:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.649 00:22:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.649 00:22:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.649 00:22:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:56.649 00:22:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:09:56.649 00:22:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.649 00:22:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.649 00:22:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.649 00:22:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.649 00:22:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.649 00:22:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.649 00:22:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.649 00:22:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.649 00:22:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.649 00:22:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.649 00:22:12 -- paths/export.sh@5 
-- # export PATH 00:09:56.649 00:22:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.649 00:22:12 -- nvmf/common.sh@46 -- # : 0 00:09:56.649 00:22:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:56.649 00:22:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:56.649 00:22:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:56.649 00:22:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.649 00:22:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.649 00:22:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:56.649 00:22:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:56.649 00:22:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:56.649 00:22:12 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.649 00:22:12 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.649 00:22:12 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.649 00:22:12 -- target/fio.sh@16 -- # nvmftestinit 00:09:56.649 00:22:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:56.649 00:22:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.649 00:22:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:56.649 00:22:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:56.649 00:22:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:56.649 00:22:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.649 00:22:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.649 00:22:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.649 00:22:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:56.649 00:22:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:56.649 00:22:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.649 00:22:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.649 00:22:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.649 00:22:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:56.649 00:22:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.649 00:22:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.649 00:22:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.649 00:22:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.649 00:22:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.649 00:22:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.649 00:22:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.649 00:22:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.649 00:22:12 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:56.649 00:22:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:56.649 Cannot find device "nvmf_tgt_br" 00:09:56.649 00:22:12 -- nvmf/common.sh@154 -- # true 00:09:56.649 00:22:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.649 Cannot find device "nvmf_tgt_br2" 00:09:56.649 00:22:12 -- nvmf/common.sh@155 -- # true 00:09:56.649 00:22:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:56.649 00:22:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:56.908 Cannot find device "nvmf_tgt_br" 00:09:56.908 00:22:12 -- nvmf/common.sh@157 -- # true 00:09:56.908 00:22:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:56.908 Cannot find device "nvmf_tgt_br2" 00:09:56.908 00:22:12 -- nvmf/common.sh@158 -- # true 00:09:56.908 00:22:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:56.908 00:22:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:56.908 00:22:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.908 00:22:12 -- nvmf/common.sh@161 -- # true 00:09:56.908 00:22:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.908 00:22:12 -- nvmf/common.sh@162 -- # true 00:09:56.908 00:22:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.908 00:22:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.908 00:22:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.909 00:22:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.909 00:22:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.909 00:22:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.909 00:22:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.909 00:22:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.909 00:22:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.909 00:22:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:56.909 00:22:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:56.909 00:22:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:56.909 00:22:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:56.909 00:22:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.909 00:22:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.909 00:22:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.909 00:22:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:56.909 00:22:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:56.909 00:22:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.909 00:22:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.909 00:22:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.909 00:22:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.167 00:22:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.167 00:22:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:57.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:09:57.167 00:09:57.167 --- 10.0.0.2 ping statistics --- 00:09:57.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.167 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:57.167 00:22:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:57.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:57.167 00:09:57.167 --- 10.0.0.3 ping statistics --- 00:09:57.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.167 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:57.167 00:22:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:57.167 00:09:57.167 --- 10.0.0.1 ping statistics --- 00:09:57.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.167 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:57.167 00:22:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.167 00:22:12 -- nvmf/common.sh@421 -- # return 0 00:09:57.167 00:22:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:57.167 00:22:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.167 00:22:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:57.167 00:22:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:57.167 00:22:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.167 00:22:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:57.167 00:22:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:57.168 00:22:12 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:57.168 00:22:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:57.168 00:22:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:57.168 00:22:12 -- common/autotest_common.sh@10 -- # set +x 00:09:57.168 00:22:12 -- nvmf/common.sh@469 -- # nvmfpid=63169 00:09:57.168 00:22:12 -- nvmf/common.sh@470 -- # waitforlisten 63169 00:09:57.168 00:22:12 -- common/autotest_common.sh@819 -- # '[' -z 63169 ']' 00:09:57.168 00:22:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.168 00:22:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:57.168 00:22:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.168 00:22:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.168 00:22:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:57.168 00:22:12 -- common/autotest_common.sh@10 -- # set +x 00:09:57.168 [2024-09-29 00:22:12.866423] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
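nvmfappstart -m 0xF launches the target inside the test namespace and blocks until its JSON-RPC socket answers. With the xtrace wrappers stripped, the launch logged above reduces to roughly the sketch below; the rpc_get_methods probe stands in for the script's waitforlisten loop and is an assumption, not a line from this log:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the app is up (waitforlisten does the equivalent with retries)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done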
00:09:57.168 [2024-09-29 00:22:12.866512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.168 [2024-09-29 00:22:13.005684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.426 [2024-09-29 00:22:13.055757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.426 [2024-09-29 00:22:13.055897] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.426 [2024-09-29 00:22:13.055909] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.426 [2024-09-29 00:22:13.055916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.426 [2024-09-29 00:22:13.056068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.426 [2024-09-29 00:22:13.056170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.426 [2024-09-29 00:22:13.057205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.426 [2024-09-29 00:22:13.057249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.364 00:22:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:58.364 00:22:13 -- common/autotest_common.sh@852 -- # return 0 00:09:58.364 00:22:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:58.364 00:22:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:58.364 00:22:13 -- common/autotest_common.sh@10 -- # set +x 00:09:58.364 00:22:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.364 00:22:13 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:58.364 [2024-09-29 00:22:14.106116] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.364 00:22:14 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.623 00:22:14 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:58.623 00:22:14 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:58.882 00:22:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:58.882 00:22:14 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.140 00:22:14 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:59.140 00:22:14 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.399 00:22:15 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:59.399 00:22:15 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:59.658 00:22:15 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:59.917 00:22:15 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:59.917 00:22:15 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.176 00:22:15 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:00.176 00:22:15 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.439 00:22:16 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:10:00.439 00:22:16 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:00.701 00:22:16 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.959 00:22:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:00.959 00:22:16 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.218 00:22:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:01.218 00:22:16 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:01.476 00:22:17 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.735 [2024-09-29 00:22:17.421086] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.735 00:22:17 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:01.994 00:22:17 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:02.252 00:22:17 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.252 00:22:18 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:02.252 00:22:18 -- common/autotest_common.sh@1177 -- # local i=0 00:10:02.252 00:22:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.252 00:22:18 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:10:02.252 00:22:18 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:10:02.252 00:22:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:04.781 00:22:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:04.781 00:22:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.781 00:22:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:04.781 00:22:20 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:10:04.781 00:22:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.781 00:22:20 -- common/autotest_common.sh@1187 -- # return 0 00:10:04.781 00:22:20 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:04.781 [global] 00:10:04.781 thread=1 00:10:04.781 invalidate=1 00:10:04.781 rw=write 00:10:04.781 time_based=1 00:10:04.781 runtime=1 00:10:04.781 ioengine=libaio 00:10:04.781 direct=1 00:10:04.781 bs=4096 00:10:04.781 iodepth=1 00:10:04.781 norandommap=0 00:10:04.781 numjobs=1 00:10:04.781 00:10:04.781 verify_dump=1 00:10:04.781 verify_backlog=512 00:10:04.781 verify_state_save=0 00:10:04.781 do_verify=1 00:10:04.781 verify=crc32c-intel 00:10:04.781 [job0] 00:10:04.781 filename=/dev/nvme0n1 00:10:04.781 [job1] 00:10:04.781 filename=/dev/nvme0n2 00:10:04.781 [job2] 00:10:04.781 filename=/dev/nvme0n3 00:10:04.781 [job3] 00:10:04.781 filename=/dev/nvme0n4 00:10:04.781 Could not set queue depth (nvme0n1) 00:10:04.781 Could not set queue depth (nvme0n2) 
00:10:04.781 Could not set queue depth (nvme0n3) 00:10:04.781 Could not set queue depth (nvme0n4) 00:10:04.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.781 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.781 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.781 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.781 fio-3.35 00:10:04.781 Starting 4 threads 00:10:05.718 00:10:05.718 job0: (groupid=0, jobs=1): err= 0: pid=63359: Sun Sep 29 00:22:21 2024 00:10:05.718 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:05.718 slat (nsec): min=10654, max=45885, avg=12790.84, stdev=2735.36 00:10:05.718 clat (usec): min=129, max=609, avg=161.50, stdev=20.76 00:10:05.718 lat (usec): min=140, max=623, avg=174.29, stdev=21.08 00:10:05.718 clat percentiles (usec): 00:10:05.718 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:05.718 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:05.718 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 188], 00:10:05.718 | 99.00th=[ 208], 99.50th=[ 229], 99.90th=[ 396], 99.95th=[ 537], 00:10:05.718 | 99.99th=[ 611] 00:10:05.718 write: IOPS=3176, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:10:05.718 slat (usec): min=12, max=107, avg=18.92, stdev= 4.40 00:10:05.718 clat (usec): min=89, max=1667, avg=124.26, stdev=36.32 00:10:05.718 lat (usec): min=107, max=1685, avg=143.18, stdev=36.80 00:10:05.718 clat percentiles (usec): 00:10:05.718 | 1.00th=[ 96], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 113], 00:10:05.718 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:10:05.718 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:10:05.718 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 269], 99.95th=[ 1188], 00:10:05.718 | 99.99th=[ 1663] 00:10:05.718 bw ( KiB/s): min=12576, max=12576, per=25.59%, avg=12576.00, stdev= 0.00, samples=1 00:10:05.718 iops : min= 3144, max= 3144, avg=3144.00, stdev= 0.00, samples=1 00:10:05.718 lat (usec) : 100=1.26%, 250=98.48%, 500=0.19%, 750=0.03% 00:10:05.718 lat (msec) : 2=0.03% 00:10:05.718 cpu : usr=2.10%, sys=7.80%, ctx=6252, majf=0, minf=5 00:10:05.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 issued rwts: total=3072,3180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.719 job1: (groupid=0, jobs=1): err= 0: pid=63360: Sun Sep 29 00:22:21 2024 00:10:05.719 read: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:10:05.719 slat (nsec): min=12014, max=47024, avg=14695.04, stdev=3770.25 00:10:05.719 clat (usec): min=133, max=492, avg=169.87, stdev=15.84 00:10:05.719 lat (usec): min=146, max=505, avg=184.57, stdev=16.14 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:10:05.719 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:10:05.719 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:10:05.719 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 241], 99.95th=[ 247], 00:10:05.719 | 99.99th=[ 494] 
00:10:05.719 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:05.719 slat (usec): min=14, max=117, avg=21.46, stdev= 5.28 00:10:05.719 clat (usec): min=95, max=1590, avg=129.63, stdev=31.08 00:10:05.719 lat (usec): min=114, max=1610, avg=151.09, stdev=31.69 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:10:05.719 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:10:05.719 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:10:05.719 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 217], 99.95th=[ 594], 00:10:05.719 | 99.99th=[ 1598] 00:10:05.719 bw ( KiB/s): min=12288, max=12288, per=25.01%, avg=12288.00, stdev= 0.00, samples=1 00:10:05.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:05.719 lat (usec) : 100=0.22%, 250=99.71%, 500=0.03%, 750=0.02% 00:10:05.719 lat (msec) : 2=0.02% 00:10:05.719 cpu : usr=2.50%, sys=8.10%, ctx=5922, majf=0, minf=10 00:10:05.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 issued rwts: total=2850,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.719 job2: (groupid=0, jobs=1): err= 0: pid=63361: Sun Sep 29 00:22:21 2024 00:10:05.719 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:05.719 slat (nsec): min=11630, max=46509, avg=14212.07, stdev=3267.41 00:10:05.719 clat (usec): min=142, max=1566, avg=180.31, stdev=31.45 00:10:05.719 lat (usec): min=155, max=1579, avg=194.52, stdev=31.66 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:10:05.719 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:10:05.719 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:10:05.719 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 247], 99.95th=[ 420], 00:10:05.719 | 99.99th=[ 1565] 00:10:05.719 write: IOPS=3046, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:10:05.719 slat (usec): min=13, max=117, avg=20.92, stdev= 4.98 00:10:05.719 clat (usec): min=105, max=534, avg=140.88, stdev=15.70 00:10:05.719 lat (usec): min=123, max=553, avg=161.80, stdev=16.64 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 116], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:10:05.719 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:10:05.719 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:10:05.719 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 262], 00:10:05.719 | 99.99th=[ 537] 00:10:05.719 bw ( KiB/s): min=12288, max=12288, per=25.01%, avg=12288.00, stdev= 0.00, samples=1 00:10:05.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:05.719 lat (usec) : 250=99.93%, 500=0.04%, 750=0.02% 00:10:05.719 lat (msec) : 2=0.02% 00:10:05.719 cpu : usr=1.80%, sys=8.10%, ctx=5610, majf=0, minf=15 00:10:05.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 issued rwts: total=2560,3050,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.719 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:10:05.719 job3: (groupid=0, jobs=1): err= 0: pid=63362: Sun Sep 29 00:22:21 2024 00:10:05.719 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:05.719 slat (nsec): min=11388, max=43991, avg=13869.52, stdev=2977.30 00:10:05.719 clat (usec): min=142, max=2646, avg=183.01, stdev=55.51 00:10:05.719 lat (usec): min=155, max=2672, avg=196.88, stdev=55.84 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:05.719 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:05.719 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 212], 00:10:05.719 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 644], 99.95th=[ 1020], 00:10:05.719 | 99.99th=[ 2638] 00:10:05.719 write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:10:05.719 slat (nsec): min=17348, max=96450, avg=20771.03, stdev=4492.46 00:10:05.719 clat (usec): min=107, max=284, avg=142.07, stdev=14.03 00:10:05.719 lat (usec): min=126, max=381, avg=162.84, stdev=15.11 00:10:05.719 clat percentiles (usec): 00:10:05.719 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:10:05.719 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:10:05.719 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 169], 00:10:05.719 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 204], 99.95th=[ 212], 00:10:05.719 | 99.99th=[ 285] 00:10:05.719 bw ( KiB/s): min=12288, max=12288, per=25.01%, avg=12288.00, stdev= 0.00, samples=1 00:10:05.719 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:05.719 lat (usec) : 250=99.87%, 500=0.05%, 750=0.04% 00:10:05.719 lat (msec) : 2=0.02%, 4=0.02% 00:10:05.719 cpu : usr=2.90%, sys=6.70%, ctx=5554, majf=0, minf=9 00:10:05.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:05.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.719 issued rwts: total=2560,2994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:05.719 00:10:05.719 Run status group 0 (all jobs): 00:10:05.719 READ: bw=43.1MiB/s (45.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=43.1MiB (45.2MB), run=1001-1001msec 00:10:05.719 WRITE: bw=48.0MiB/s (50.3MB/s), 11.7MiB/s-12.4MiB/s (12.3MB/s-13.0MB/s), io=48.0MiB (50.4MB), run=1001-1001msec 00:10:05.719 00:10:05.719 Disk stats (read/write): 00:10:05.719 nvme0n1: ios=2610/2842, merge=0/0, ticks=424/373, in_queue=797, util=87.37% 00:10:05.719 nvme0n2: ios=2545/2560, merge=0/0, ticks=477/351, in_queue=828, util=88.74% 00:10:05.719 nvme0n3: ios=2248/2560, merge=0/0, ticks=414/379, in_queue=793, util=89.12% 00:10:05.719 nvme0n4: ios=2204/2560, merge=0/0, ticks=409/391, in_queue=800, util=89.57% 00:10:05.719 00:22:21 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:05.719 [global] 00:10:05.719 thread=1 00:10:05.719 invalidate=1 00:10:05.719 rw=randwrite 00:10:05.719 time_based=1 00:10:05.719 runtime=1 00:10:05.719 ioengine=libaio 00:10:05.719 direct=1 00:10:05.719 bs=4096 00:10:05.719 iodepth=1 00:10:05.719 norandommap=0 00:10:05.719 numjobs=1 00:10:05.719 00:10:05.719 verify_dump=1 00:10:05.719 verify_backlog=512 00:10:05.719 verify_state_save=0 00:10:05.719 do_verify=1 00:10:05.719 verify=crc32c-intel 00:10:05.719 [job0] 
00:10:05.719 filename=/dev/nvme0n1 00:10:05.719 [job1] 00:10:05.719 filename=/dev/nvme0n2 00:10:05.719 [job2] 00:10:05.719 filename=/dev/nvme0n3 00:10:05.719 [job3] 00:10:05.719 filename=/dev/nvme0n4 00:10:05.719 Could not set queue depth (nvme0n1) 00:10:05.719 Could not set queue depth (nvme0n2) 00:10:05.719 Could not set queue depth (nvme0n3) 00:10:05.719 Could not set queue depth (nvme0n4) 00:10:05.979 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.979 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.979 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.979 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:05.979 fio-3.35 00:10:05.979 Starting 4 threads 00:10:07.364 00:10:07.364 job0: (groupid=0, jobs=1): err= 0: pid=63416: Sun Sep 29 00:22:22 2024 00:10:07.364 read: IOPS=1980, BW=7920KiB/s (8110kB/s)(7928KiB/1001msec) 00:10:07.364 slat (nsec): min=8519, max=43522, avg=13405.06, stdev=3068.07 00:10:07.364 clat (usec): min=133, max=904, avg=290.54, stdev=72.11 00:10:07.364 lat (usec): min=144, max=921, avg=303.94, stdev=72.81 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 151], 5.00th=[ 223], 10.00th=[ 235], 20.00th=[ 245], 00:10:07.364 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:10:07.364 | 70.00th=[ 306], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 396], 00:10:07.364 | 99.00th=[ 545], 99.50th=[ 570], 99.90th=[ 898], 99.95th=[ 906], 00:10:07.364 | 99.99th=[ 906] 00:10:07.364 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.364 slat (usec): min=10, max=102, avg=19.02, stdev= 4.70 00:10:07.364 clat (usec): min=89, max=2528, avg=171.86, stdev=69.51 00:10:07.364 lat (usec): min=106, max=2546, avg=190.88, stdev=70.02 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 97], 5.00th=[ 105], 10.00th=[ 114], 20.00th=[ 125], 00:10:07.364 | 30.00th=[ 137], 40.00th=[ 153], 50.00th=[ 172], 60.00th=[ 190], 00:10:07.364 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 237], 00:10:07.364 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 529], 99.95th=[ 586], 00:10:07.364 | 99.99th=[ 2540] 00:10:07.364 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.364 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.364 lat (usec) : 100=1.19%, 250=61.39%, 500=36.03%, 750=1.24%, 1000=0.12% 00:10:07.364 lat (msec) : 4=0.02% 00:10:07.364 cpu : usr=1.80%, sys=4.90%, ctx=4032, majf=0, minf=9 00:10:07.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 issued rwts: total=1982,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.364 job1: (groupid=0, jobs=1): err= 0: pid=63417: Sun Sep 29 00:22:22 2024 00:10:07.364 read: IOPS=2965, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:10:07.364 slat (nsec): min=10807, max=93481, avg=13298.09, stdev=4662.01 00:10:07.364 clat (usec): min=104, max=1760, avg=167.45, stdev=34.35 00:10:07.364 lat (usec): min=143, max=1772, avg=180.74, stdev=34.77 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 
1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:07.364 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:07.364 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:10:07.364 | 99.00th=[ 215], 99.50th=[ 231], 99.90th=[ 367], 99.95th=[ 412], 00:10:07.364 | 99.99th=[ 1762] 00:10:07.364 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:07.364 slat (usec): min=13, max=107, avg=19.38, stdev= 4.41 00:10:07.364 clat (usec): min=93, max=232, avg=128.28, stdev=13.46 00:10:07.364 lat (usec): min=111, max=340, avg=147.66, stdev=14.43 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 101], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:10:07.364 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 131], 00:10:07.364 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 153], 00:10:07.364 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 233], 00:10:07.364 | 99.99th=[ 233] 00:10:07.364 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:07.364 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:07.364 lat (usec) : 100=0.36%, 250=99.54%, 500=0.08% 00:10:07.364 lat (msec) : 2=0.02% 00:10:07.364 cpu : usr=1.80%, sys=8.30%, ctx=6048, majf=0, minf=15 00:10:07.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 issued rwts: total=2968,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.364 job2: (groupid=0, jobs=1): err= 0: pid=63418: Sun Sep 29 00:22:22 2024 00:10:07.364 read: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:10:07.364 slat (nsec): min=11034, max=46302, avg=13601.20, stdev=3220.30 00:10:07.364 clat (usec): min=140, max=1808, avg=177.87, stdev=38.85 00:10:07.364 lat (usec): min=152, max=1821, avg=191.47, stdev=39.03 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:07.364 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:07.364 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:07.364 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 383], 99.95th=[ 955], 00:10:07.364 | 99.99th=[ 1811] 00:10:07.364 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:07.364 slat (nsec): min=14665, max=98716, avg=20249.68, stdev=5683.71 00:10:07.364 clat (usec): min=99, max=224, avg=137.08, stdev=14.12 00:10:07.364 lat (usec): min=117, max=322, avg=157.33, stdev=15.56 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:10:07.364 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:10:07.364 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:10:07.364 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 206], 00:10:07.364 | 99.99th=[ 225] 00:10:07.364 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:07.364 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:07.364 lat (usec) : 100=0.02%, 250=99.93%, 500=0.02%, 1000=0.02% 00:10:07.364 lat (msec) : 2=0.02% 00:10:07.364 cpu : usr=2.80%, sys=7.00%, ctx=5722, majf=0, minf=9 00:10:07.364 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.364 issued rwts: total=2649,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.364 job3: (groupid=0, jobs=1): err= 0: pid=63419: Sun Sep 29 00:22:22 2024 00:10:07.364 read: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec) 00:10:07.364 slat (nsec): min=8004, max=46739, avg=15083.16, stdev=4778.36 00:10:07.364 clat (usec): min=173, max=584, avg=291.69, stdev=64.05 00:10:07.364 lat (usec): min=185, max=607, avg=306.78, stdev=66.38 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 247], 00:10:07.364 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 281], 00:10:07.364 | 70.00th=[ 297], 80.00th=[ 330], 90.00th=[ 379], 95.00th=[ 453], 00:10:07.364 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 537], 99.95th=[ 586], 00:10:07.364 | 99.99th=[ 586] 00:10:07.364 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:07.364 slat (usec): min=11, max=117, avg=25.05, stdev= 9.52 00:10:07.364 clat (usec): min=107, max=7702, avg=204.76, stdev=188.35 00:10:07.364 lat (usec): min=130, max=7727, avg=229.80, stdev=190.50 00:10:07.364 clat percentiles (usec): 00:10:07.364 | 1.00th=[ 116], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 143], 00:10:07.364 | 30.00th=[ 151], 40.00th=[ 165], 50.00th=[ 184], 60.00th=[ 200], 00:10:07.364 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 310], 95.00th=[ 351], 00:10:07.364 | 99.00th=[ 408], 99.50th=[ 578], 99.90th=[ 1303], 99.95th=[ 2147], 00:10:07.364 | 99.99th=[ 7701] 00:10:07.364 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:07.364 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:07.364 lat (usec) : 250=56.27%, 500=42.88%, 750=0.75% 00:10:07.364 lat (msec) : 2=0.05%, 4=0.03%, 10=0.03% 00:10:07.364 cpu : usr=1.30%, sys=6.50%, ctx=3758, majf=0, minf=15 00:10:07.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.365 issued rwts: total=1702,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.365 00:10:07.365 Run status group 0 (all jobs): 00:10:07.365 READ: bw=36.3MiB/s (38.1MB/s), 6801KiB/s-11.6MiB/s (6964kB/s-12.1MB/s), io=36.3MiB (38.1MB), run=1001-1001msec 00:10:07.365 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:07.365 00:10:07.365 Disk stats (read/write): 00:10:07.365 nvme0n1: ios=1598/2048, merge=0/0, ticks=462/370, in_queue=832, util=88.16% 00:10:07.365 nvme0n2: ios=2608/2668, merge=0/0, ticks=448/365, in_queue=813, util=89.59% 00:10:07.365 nvme0n3: ios=2376/2560, merge=0/0, ticks=442/378, in_queue=820, util=89.78% 00:10:07.365 nvme0n4: ios=1524/1536, merge=0/0, ticks=458/350, in_queue=808, util=89.52% 00:10:07.365 00:22:22 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:07.365 [global] 00:10:07.365 thread=1 00:10:07.365 invalidate=1 00:10:07.365 rw=write 00:10:07.365 time_based=1 00:10:07.365 runtime=1 
00:10:07.365 ioengine=libaio 00:10:07.365 direct=1 00:10:07.365 bs=4096 00:10:07.365 iodepth=128 00:10:07.365 norandommap=0 00:10:07.365 numjobs=1 00:10:07.365 00:10:07.365 verify_dump=1 00:10:07.365 verify_backlog=512 00:10:07.365 verify_state_save=0 00:10:07.365 do_verify=1 00:10:07.365 verify=crc32c-intel 00:10:07.365 [job0] 00:10:07.365 filename=/dev/nvme0n1 00:10:07.365 [job1] 00:10:07.365 filename=/dev/nvme0n2 00:10:07.365 [job2] 00:10:07.365 filename=/dev/nvme0n3 00:10:07.365 [job3] 00:10:07.365 filename=/dev/nvme0n4 00:10:07.365 Could not set queue depth (nvme0n1) 00:10:07.365 Could not set queue depth (nvme0n2) 00:10:07.365 Could not set queue depth (nvme0n3) 00:10:07.365 Could not set queue depth (nvme0n4) 00:10:07.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.365 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.365 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.365 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.365 fio-3.35 00:10:07.365 Starting 4 threads 00:10:08.741 00:10:08.741 job0: (groupid=0, jobs=1): err= 0: pid=63474: Sun Sep 29 00:22:24 2024 00:10:08.741 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:10:08.741 slat (usec): min=7, max=6205, avg=176.22, stdev=903.56 00:10:08.741 clat (usec): min=16862, max=25832, avg=23186.25, stdev=1124.93 00:10:08.741 lat (usec): min=22070, max=25902, avg=23362.47, stdev=666.68 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[17695], 5.00th=[22152], 10.00th=[22414], 20.00th=[22676], 00:10:08.741 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:10:08.741 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:10:08.741 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:10:08.741 | 99.99th=[25822] 00:10:08.741 write: IOPS=2904, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1003msec); 0 zone resets 00:10:08.741 slat (usec): min=9, max=6074, avg=181.83, stdev=891.56 00:10:08.741 clat (usec): min=749, max=25126, avg=22888.45, stdev=2580.56 00:10:08.741 lat (usec): min=4856, max=25145, avg=23070.28, stdev=2426.17 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[ 5669], 5.00th=[18482], 10.00th=[22152], 20.00th=[22676], 00:10:08.741 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:10:08.741 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:10:08.741 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:10:08.741 | 99.99th=[25035] 00:10:08.741 bw ( KiB/s): min=10578, max=12288, per=16.81%, avg=11433.00, stdev=1209.15, samples=2 00:10:08.741 iops : min= 2650, max= 3072, avg=2861.00, stdev=298.40, samples=2 00:10:08.741 lat (usec) : 750=0.02% 00:10:08.741 lat (msec) : 10=0.58%, 20=4.22%, 50=95.18% 00:10:08.741 cpu : usr=2.59%, sys=7.29%, ctx=174, majf=0, minf=5 00:10:08.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:08.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.741 issued rwts: total=2560,2913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.741 job1: (groupid=0, jobs=1): err= 0: pid=63475: Sun Sep 
29 00:22:24 2024 00:10:08.741 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:08.741 slat (usec): min=7, max=4771, avg=82.32, stdev=412.35 00:10:08.741 clat (usec): min=6388, max=15939, avg=10773.87, stdev=1173.83 00:10:08.741 lat (usec): min=6406, max=16339, avg=10856.19, stdev=1206.30 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:10:08.741 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:10:08.741 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12649], 00:10:08.741 | 99.00th=[14484], 99.50th=[15139], 99.90th=[15664], 99.95th=[15664], 00:10:08.741 | 99.99th=[15926] 00:10:08.741 write: IOPS=6085, BW=23.8MiB/s (24.9MB/s)(23.8MiB/1003msec); 0 zone resets 00:10:08.741 slat (usec): min=10, max=5357, avg=81.23, stdev=435.71 00:10:08.741 clat (usec): min=160, max=17211, avg=10834.82, stdev=1339.23 00:10:08.741 lat (usec): min=4160, max=17251, avg=10916.05, stdev=1397.91 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10159], 00:10:08.741 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:10:08.741 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12387], 95.00th=[12780], 00:10:08.741 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16909], 99.95th=[16909], 00:10:08.741 | 99.99th=[17171] 00:10:08.741 bw ( KiB/s): min=23185, max=24625, per=35.15%, avg=23905.00, stdev=1018.23, samples=2 00:10:08.741 iops : min= 5796, max= 6156, avg=5976.00, stdev=254.56, samples=2 00:10:08.741 lat (usec) : 250=0.01% 00:10:08.741 lat (msec) : 10=17.47%, 20=82.52% 00:10:08.741 cpu : usr=5.39%, sys=14.17%, ctx=474, majf=0, minf=9 00:10:08.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:08.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.741 issued rwts: total=5632,6104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.741 job2: (groupid=0, jobs=1): err= 0: pid=63476: Sun Sep 29 00:22:24 2024 00:10:08.741 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:08.741 slat (usec): min=7, max=5717, avg=176.35, stdev=904.70 00:10:08.741 clat (usec): min=17016, max=25150, avg=23174.79, stdev=1103.18 00:10:08.741 lat (usec): min=22241, max=25195, avg=23351.14, stdev=630.59 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[17695], 5.00th=[22414], 10.00th=[22414], 20.00th=[22676], 00:10:08.741 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:10:08.741 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:10:08.741 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:10:08.741 | 99.99th=[25035] 00:10:08.741 write: IOPS=2913, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:10:08.741 slat (usec): min=6, max=6141, avg=181.08, stdev=887.13 00:10:08.741 clat (usec): min=574, max=25206, avg=22834.71, stdev=2581.14 00:10:08.741 lat (usec): min=600, max=25237, avg=23015.79, stdev=2427.24 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[ 6325], 5.00th=[18482], 10.00th=[22152], 20.00th=[22676], 00:10:08.741 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:10:08.741 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:10:08.741 | 99.00th=[24773], 99.50th=[25035], 
99.90th=[25297], 99.95th=[25297], 00:10:08.741 | 99.99th=[25297] 00:10:08.741 bw ( KiB/s): min=12312, max=12312, per=18.10%, avg=12312.00, stdev= 0.00, samples=1 00:10:08.741 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:10:08.741 lat (usec) : 750=0.07% 00:10:08.741 lat (msec) : 10=0.58%, 20=4.24%, 50=95.11% 00:10:08.741 cpu : usr=2.90%, sys=7.40%, ctx=174, majf=0, minf=13 00:10:08.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:08.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.741 issued rwts: total=2560,2916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.741 job3: (groupid=0, jobs=1): err= 0: pid=63477: Sun Sep 29 00:22:24 2024 00:10:08.741 read: IOPS=4946, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1003msec) 00:10:08.741 slat (usec): min=5, max=3173, avg=94.09, stdev=443.46 00:10:08.741 clat (usec): min=221, max=14440, avg=12461.93, stdev=1087.17 00:10:08.741 lat (usec): min=3307, max=16327, avg=12556.02, stdev=1002.28 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[ 6783], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:10:08.741 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:10:08.741 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:10:08.741 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14353], 99.95th=[14484], 00:10:08.741 | 99.99th=[14484] 00:10:08.741 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:08.741 slat (usec): min=7, max=2840, avg=96.89, stdev=412.65 00:10:08.741 clat (usec): min=9423, max=14309, avg=12670.55, stdev=595.05 00:10:08.741 lat (usec): min=9565, max=15338, avg=12767.43, stdev=463.96 00:10:08.741 clat percentiles (usec): 00:10:08.741 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12125], 20.00th=[12387], 00:10:08.741 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:10:08.741 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:10:08.741 | 99.00th=[13698], 99.50th=[14091], 99.90th=[14222], 99.95th=[14353], 00:10:08.741 | 99.99th=[14353] 00:10:08.741 bw ( KiB/s): min=20439, max=20521, per=30.11%, avg=20480.00, stdev=57.98, samples=2 00:10:08.741 iops : min= 5109, max= 5130, avg=5119.50, stdev=14.85, samples=2 00:10:08.741 lat (usec) : 250=0.01% 00:10:08.741 lat (msec) : 4=0.32%, 10=1.30%, 20=98.37% 00:10:08.741 cpu : usr=4.59%, sys=13.77%, ctx=333, majf=0, minf=13 00:10:08.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:08.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.741 issued rwts: total=4961,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.741 00:10:08.741 Run status group 0 (all jobs): 00:10:08.741 READ: bw=61.2MiB/s (64.2MB/s), 9.97MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=61.4MiB (64.4MB), run=1001-1003msec 00:10:08.741 WRITE: bw=66.4MiB/s (69.6MB/s), 11.3MiB/s-23.8MiB/s (11.9MB/s-24.9MB/s), io=66.6MiB (69.8MB), run=1001-1003msec 00:10:08.741 00:10:08.741 Disk stats (read/write): 00:10:08.742 nvme0n1: ios=2258/2560, merge=0/0, ticks=11457/12810, in_queue=24267, util=88.77% 00:10:08.742 nvme0n2: ios=5101/5120, merge=0/0, ticks=25849/23981, in_queue=49830, 
util=90.40% 00:10:08.742 nvme0n3: ios=2242/2560, merge=0/0, ticks=11546/12796, in_queue=24342, util=90.37% 00:10:08.742 nvme0n4: ios=4182/4608, merge=0/0, ticks=11710/12439, in_queue=24149, util=90.33% 00:10:08.742 00:22:24 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:08.742 [global] 00:10:08.742 thread=1 00:10:08.742 invalidate=1 00:10:08.742 rw=randwrite 00:10:08.742 time_based=1 00:10:08.742 runtime=1 00:10:08.742 ioengine=libaio 00:10:08.742 direct=1 00:10:08.742 bs=4096 00:10:08.742 iodepth=128 00:10:08.742 norandommap=0 00:10:08.742 numjobs=1 00:10:08.742 00:10:08.742 verify_dump=1 00:10:08.742 verify_backlog=512 00:10:08.742 verify_state_save=0 00:10:08.742 do_verify=1 00:10:08.742 verify=crc32c-intel 00:10:08.742 [job0] 00:10:08.742 filename=/dev/nvme0n1 00:10:08.742 [job1] 00:10:08.742 filename=/dev/nvme0n2 00:10:08.742 [job2] 00:10:08.742 filename=/dev/nvme0n3 00:10:08.742 [job3] 00:10:08.742 filename=/dev/nvme0n4 00:10:08.742 Could not set queue depth (nvme0n1) 00:10:08.742 Could not set queue depth (nvme0n2) 00:10:08.742 Could not set queue depth (nvme0n3) 00:10:08.742 Could not set queue depth (nvme0n4) 00:10:08.742 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.742 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.742 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.742 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:08.742 fio-3.35 00:10:08.742 Starting 4 threads 00:10:10.118 00:10:10.118 job0: (groupid=0, jobs=1): err= 0: pid=63537: Sun Sep 29 00:22:25 2024 00:10:10.118 read: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec) 00:10:10.118 slat (usec): min=3, max=10308, avg=173.89, stdev=835.24 00:10:10.118 clat (usec): min=1925, max=41735, avg=22041.25, stdev=6684.74 00:10:10.118 lat (usec): min=4513, max=41750, avg=22215.14, stdev=6724.20 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[ 6325], 5.00th=[10552], 10.00th=[10945], 20.00th=[17433], 00:10:10.118 | 30.00th=[20579], 40.00th=[22152], 50.00th=[23462], 60.00th=[24249], 00:10:10.118 | 70.00th=[25297], 80.00th=[26608], 90.00th=[29492], 95.00th=[31589], 00:10:10.118 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38011], 99.95th=[39584], 00:10:10.118 | 99.99th=[41681] 00:10:10.118 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:10:10.118 slat (usec): min=5, max=10584, avg=149.49, stdev=586.96 00:10:10.118 clat (usec): min=5644, max=34203, avg=20076.40, stdev=6446.92 00:10:10.118 lat (usec): min=7786, max=34584, avg=20225.89, stdev=6486.23 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:10:10.118 | 30.00th=[16712], 40.00th=[19268], 50.00th=[21890], 60.00th=[22938], 00:10:10.118 | 70.00th=[24249], 80.00th=[25822], 90.00th=[27132], 95.00th=[29492], 00:10:10.118 | 99.00th=[32375], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:10:10.118 | 99.99th=[34341] 00:10:10.118 bw ( KiB/s): min=10240, max=14336, per=18.46%, avg=12288.00, stdev=2896.31, samples=2 00:10:10.118 iops : min= 2560, max= 3584, avg=3072.00, stdev=724.08, samples=2 00:10:10.118 lat (msec) : 2=0.02%, 10=2.74%, 20=32.26%, 50=64.98% 00:10:10.118 cpu : usr=2.19%, sys=8.97%, ctx=877, majf=0, 
minf=13 00:10:10.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.118 issued rwts: total=2976,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.118 job1: (groupid=0, jobs=1): err= 0: pid=63538: Sun Sep 29 00:22:25 2024 00:10:10.118 read: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:10.118 slat (usec): min=7, max=16150, avg=82.79, stdev=544.02 00:10:10.118 clat (usec): min=1027, max=30342, avg=11153.92, stdev=2702.43 00:10:10.118 lat (usec): min=3088, max=30358, avg=11236.71, stdev=2715.93 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10290], 00:10:10.118 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:10:10.118 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12125], 95.00th=[15401], 00:10:10.118 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30278], 99.95th=[30278], 00:10:10.118 | 99.99th=[30278] 00:10:10.118 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:10.118 slat (usec): min=6, max=19909, avg=87.86, stdev=599.07 00:10:10.118 clat (usec): min=5380, max=33306, avg=11409.16, stdev=4048.01 00:10:10.118 lat (usec): min=6233, max=33333, avg=11497.03, stdev=4039.12 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 9372], 20.00th=[ 9765], 00:10:10.118 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:10:10.118 | 70.00th=[10814], 80.00th=[10945], 90.00th=[13435], 95.00th=[22676], 00:10:10.118 | 99.00th=[32375], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:10:10.118 | 99.99th=[33424] 00:10:10.118 bw ( KiB/s): min=20577, max=24520, per=33.88%, avg=22548.50, stdev=2788.12, samples=2 00:10:10.118 iops : min= 5144, max= 6130, avg=5637.00, stdev=697.21, samples=2 00:10:10.118 lat (msec) : 2=0.01%, 4=0.25%, 10=20.77%, 20=74.48%, 50=4.49% 00:10:10.118 cpu : usr=4.00%, sys=14.29%, ctx=271, majf=0, minf=9 00:10:10.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.118 issued rwts: total=5625,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.118 job2: (groupid=0, jobs=1): err= 0: pid=63539: Sun Sep 29 00:22:25 2024 00:10:10.118 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:10:10.118 slat (usec): min=3, max=17006, avg=199.87, stdev=887.05 00:10:10.118 clat (usec): min=10196, max=38126, avg=25281.21, stdev=4572.05 00:10:10.118 lat (usec): min=10227, max=38262, avg=25481.08, stdev=4589.35 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[15926], 5.00th=[16712], 10.00th=[19530], 20.00th=[21627], 00:10:10.118 | 30.00th=[23200], 40.00th=[23987], 50.00th=[25297], 60.00th=[26346], 00:10:10.118 | 70.00th=[27657], 80.00th=[29230], 90.00th=[31327], 95.00th=[33162], 00:10:10.118 | 99.00th=[34341], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:10:10.118 | 99.99th=[38011] 00:10:10.118 write: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1003msec); 0 zone resets 00:10:10.118 slat (usec): min=6, max=19246, avg=161.37, stdev=808.11 
00:10:10.118 clat (usec): min=1705, max=34855, avg=21661.61, stdev=5036.41 00:10:10.118 lat (usec): min=3749, max=34875, avg=21822.98, stdev=5039.63 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[10028], 5.00th=[14091], 10.00th=[15533], 20.00th=[17433], 00:10:10.118 | 30.00th=[18744], 40.00th=[20317], 50.00th=[21627], 60.00th=[22676], 00:10:10.118 | 70.00th=[24511], 80.00th=[26346], 90.00th=[28181], 95.00th=[29230], 00:10:10.118 | 99.00th=[32375], 99.50th=[32637], 99.90th=[34866], 99.95th=[34866], 00:10:10.118 | 99.99th=[34866] 00:10:10.118 bw ( KiB/s): min= 9736, max=12312, per=16.56%, avg=11024.00, stdev=1821.51, samples=2 00:10:10.118 iops : min= 2434, max= 3078, avg=2756.00, stdev=455.38, samples=2 00:10:10.118 lat (msec) : 2=0.02%, 4=0.18%, 10=0.29%, 20=25.29%, 50=74.21% 00:10:10.118 cpu : usr=2.79%, sys=7.78%, ctx=807, majf=0, minf=18 00:10:10.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:10.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.118 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.118 job3: (groupid=0, jobs=1): err= 0: pid=63540: Sun Sep 29 00:22:25 2024 00:10:10.118 read: IOPS=5068, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1004msec) 00:10:10.118 slat (usec): min=7, max=2987, avg=92.85, stdev=435.93 00:10:10.118 clat (usec): min=815, max=13661, avg=12271.26, stdev=1094.60 00:10:10.118 lat (usec): min=3065, max=13671, avg=12364.10, stdev=1006.34 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[ 5997], 5.00th=[11469], 10.00th=[11863], 20.00th=[11994], 00:10:10.118 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:10:10.118 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13173], 00:10:10.118 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13566], 99.95th=[13698], 00:10:10.118 | 99.99th=[13698] 00:10:10.118 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:10.118 slat (usec): min=12, max=2870, avg=95.48, stdev=405.04 00:10:10.118 clat (usec): min=9368, max=14530, avg=12552.62, stdev=562.90 00:10:10.118 lat (usec): min=10719, max=14573, avg=12648.10, stdev=393.38 00:10:10.118 clat percentiles (usec): 00:10:10.118 | 1.00th=[10159], 5.00th=[11994], 10.00th=[12125], 20.00th=[12256], 00:10:10.118 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:10:10.118 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:10:10.118 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14353], 99.95th=[14484], 00:10:10.118 | 99.99th=[14484] 00:10:10.118 bw ( KiB/s): min=20480, max=20521, per=30.80%, avg=20500.50, stdev=28.99, samples=2 00:10:10.118 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:10.118 lat (usec) : 1000=0.01% 00:10:10.118 lat (msec) : 4=0.31%, 10=1.72%, 20=97.95% 00:10:10.119 cpu : usr=4.39%, sys=14.36%, ctx=322, majf=0, minf=9 00:10:10.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:10.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.119 issued rwts: total=5089,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.119 00:10:10.119 Run status group 0 (all jobs): 
00:10:10.119 READ: bw=63.2MiB/s (66.3MB/s), 9.97MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=63.5MiB (66.6MB), run=1002-1004msec 00:10:10.119 WRITE: bw=65.0MiB/s (68.2MB/s), 11.2MiB/s-22.0MiB/s (11.8MB/s-23.0MB/s), io=65.3MiB (68.4MB), run=1002-1004msec 00:10:10.119 00:10:10.119 Disk stats (read/write): 00:10:10.119 nvme0n1: ios=2610/2710, merge=0/0, ticks=29552/29286, in_queue=58838, util=88.06% 00:10:10.119 nvme0n2: ios=4654/4946, merge=0/0, ticks=48842/52861, in_queue=101703, util=89.17% 00:10:10.119 nvme0n3: ios=2087/2560, merge=0/0, ticks=26737/31595, in_queue=58332, util=89.29% 00:10:10.119 nvme0n4: ios=4182/4608, merge=0/0, ticks=11484/12399, in_queue=23883, util=90.35% 00:10:10.119 00:22:25 -- target/fio.sh@55 -- # sync 00:10:10.119 00:22:25 -- target/fio.sh@59 -- # fio_pid=63553 00:10:10.119 00:22:25 -- target/fio.sh@61 -- # sleep 3 00:10:10.119 00:22:25 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:10.119 [global] 00:10:10.119 thread=1 00:10:10.119 invalidate=1 00:10:10.119 rw=read 00:10:10.119 time_based=1 00:10:10.119 runtime=10 00:10:10.119 ioengine=libaio 00:10:10.119 direct=1 00:10:10.119 bs=4096 00:10:10.119 iodepth=1 00:10:10.119 norandommap=1 00:10:10.119 numjobs=1 00:10:10.119 00:10:10.119 [job0] 00:10:10.119 filename=/dev/nvme0n1 00:10:10.119 [job1] 00:10:10.119 filename=/dev/nvme0n2 00:10:10.119 [job2] 00:10:10.119 filename=/dev/nvme0n3 00:10:10.119 [job3] 00:10:10.119 filename=/dev/nvme0n4 00:10:10.119 Could not set queue depth (nvme0n1) 00:10:10.119 Could not set queue depth (nvme0n2) 00:10:10.119 Could not set queue depth (nvme0n3) 00:10:10.119 Could not set queue depth (nvme0n4) 00:10:10.119 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.119 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.119 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.119 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.119 fio-3.35 00:10:10.119 Starting 4 threads 00:10:13.402 00:22:28 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:13.402 fio: pid=63601, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.402 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=64024576, buflen=4096 00:10:13.402 00:22:28 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:13.402 fio: pid=63600, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.402 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48136192, buflen=4096 00:10:13.661 00:22:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.661 00:22:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:13.661 fio: pid=63598, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.661 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52154368, buflen=4096 00:10:13.661 00:22:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:13.661 00:22:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 
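In the sequence above, target/fio.sh backgrounds a 10-second fio read job through fio-wrapper and, after a short sleep, begins deleting the RAID and malloc bdevs that back nvme0n1-n4 while the job is still running; each delete surfaces as the "Operation not supported" io_u errors interleaved with the rpc.py calls. A rough bash sketch of that flow, condensed from the trace rather than taken from the script source — the bdev list, ordering, and the wait/fio_status handling are approximations:

    sync
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the bdevs out from under the NVMe namespaces while fio keeps reading
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=4   # the hotplug test expects the job to fail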
00:10:13.920 fio: pid=63599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:13.920 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14864384, buflen=4096 00:10:14.179 00:10:14.179 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63598: Sun Sep 29 00:22:29 2024 00:10:14.179 read: IOPS=3615, BW=14.1MiB/s (14.8MB/s)(49.7MiB/3522msec) 00:10:14.179 slat (usec): min=10, max=9645, avg=15.58, stdev=142.26 00:10:14.179 clat (usec): min=129, max=7704, avg=259.62, stdev=99.31 00:10:14.179 lat (usec): min=140, max=9926, avg=275.20, stdev=174.47 00:10:14.179 clat percentiles (usec): 00:10:14.179 | 1.00th=[ 157], 5.00th=[ 215], 10.00th=[ 229], 20.00th=[ 239], 00:10:14.179 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:10:14.179 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:10:14.179 | 99.00th=[ 322], 99.50th=[ 441], 99.90th=[ 1106], 99.95th=[ 1647], 00:10:14.179 | 99.99th=[ 5604] 00:10:14.179 bw ( KiB/s): min=14104, max=14568, per=22.64%, avg=14373.83, stdev=189.31, samples=6 00:10:14.179 iops : min= 3526, max= 3642, avg=3593.33, stdev=47.31, samples=6 00:10:14.179 lat (usec) : 250=37.45%, 500=62.13%, 750=0.19%, 1000=0.09% 00:10:14.179 lat (msec) : 2=0.10%, 4=0.01%, 10=0.02% 00:10:14.179 cpu : usr=0.85%, sys=4.37%, ctx=12738, majf=0, minf=1 00:10:14.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.179 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.179 issued rwts: total=12734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.179 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63599: Sun Sep 29 00:22:29 2024 00:10:14.180 read: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(78.2MiB/3788msec) 00:10:14.180 slat (usec): min=7, max=16580, avg=17.36, stdev=222.23 00:10:14.180 clat (usec): min=116, max=97205, avg=170.72, stdev=687.04 00:10:14.180 lat (usec): min=127, max=97218, avg=188.08, stdev=723.64 00:10:14.180 clat percentiles (usec): 00:10:14.180 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:10:14.180 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 167], 00:10:14.180 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 204], 00:10:14.180 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 562], 99.95th=[ 1106], 00:10:14.180 | 99.99th=[ 1811] 00:10:14.180 bw ( KiB/s): min=14091, max=22840, per=33.36%, avg=21181.71, stdev=3169.38, samples=7 00:10:14.180 iops : min= 3522, max= 5710, avg=5295.29, stdev=792.62, samples=7 00:10:14.180 lat (usec) : 250=99.42%, 500=0.44%, 750=0.06%, 1000=0.01% 00:10:14.180 lat (msec) : 2=0.05%, 100=0.01% 00:10:14.180 cpu : usr=1.29%, sys=6.39%, ctx=20030, majf=0, minf=2 00:10:14.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 issued rwts: total=20014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.180 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63600: Sun Sep 29 00:22:29 2024 00:10:14.180 
read: IOPS=3591, BW=14.0MiB/s (14.7MB/s)(45.9MiB/3272msec) 00:10:14.180 slat (usec): min=11, max=8055, avg=15.69, stdev=102.33 00:10:14.180 clat (usec): min=141, max=2666, avg=261.29, stdev=56.97 00:10:14.180 lat (usec): min=153, max=8422, avg=276.98, stdev=118.04 00:10:14.180 clat percentiles (usec): 00:10:14.180 | 1.00th=[ 174], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 243], 00:10:14.180 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:10:14.180 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:10:14.180 | 99.00th=[ 367], 99.50th=[ 420], 99.90th=[ 906], 99.95th=[ 1352], 00:10:14.180 | 99.99th=[ 2606] 00:10:14.180 bw ( KiB/s): min=14272, max=14656, per=22.71%, avg=14420.00, stdev=142.10, samples=6 00:10:14.180 iops : min= 3568, max= 3664, avg=3605.00, stdev=35.52, samples=6 00:10:14.180 lat (usec) : 250=33.90%, 500=65.79%, 750=0.17%, 1000=0.05% 00:10:14.180 lat (msec) : 2=0.05%, 4=0.03% 00:10:14.180 cpu : usr=1.16%, sys=4.59%, ctx=11756, majf=0, minf=2 00:10:14.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 issued rwts: total=11753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.180 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63601: Sun Sep 29 00:22:29 2024 00:10:14.180 read: IOPS=5208, BW=20.3MiB/s (21.3MB/s)(61.1MiB/3001msec) 00:10:14.180 slat (nsec): min=10620, max=62401, avg=13053.48, stdev=3449.34 00:10:14.180 clat (usec): min=138, max=7799, avg=177.60, stdev=68.48 00:10:14.180 lat (usec): min=150, max=7811, avg=190.65, stdev=68.66 00:10:14.180 clat percentiles (usec): 00:10:14.180 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:14.180 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:14.180 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:14.180 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 469], 99.95th=[ 996], 00:10:14.180 | 99.99th=[ 1713] 00:10:14.180 bw ( KiB/s): min=20319, max=21192, per=32.77%, avg=20807.80, stdev=404.16, samples=5 00:10:14.180 iops : min= 5079, max= 5298, avg=5201.80, stdev=101.27, samples=5 00:10:14.180 lat (usec) : 250=99.83%, 500=0.08%, 750=0.03%, 1000=0.02% 00:10:14.180 lat (msec) : 2=0.04%, 10=0.01% 00:10:14.180 cpu : usr=1.43%, sys=6.30%, ctx=15633, majf=0, minf=2 00:10:14.180 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.180 issued rwts: total=15632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.180 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.180 00:10:14.180 Run status group 0 (all jobs): 00:10:14.180 READ: bw=62.0MiB/s (65.0MB/s), 14.0MiB/s-20.6MiB/s (14.7MB/s-21.6MB/s), io=235MiB (246MB), run=3001-3788msec 00:10:14.180 00:10:14.180 Disk stats (read/write): 00:10:14.180 nvme0n1: ios=12088/0, merge=0/0, ticks=3180/0, in_queue=3180, util=95.42% 00:10:14.180 nvme0n2: ios=18934/0, merge=0/0, ticks=3291/0, in_queue=3291, util=94.97% 00:10:14.180 nvme0n3: ios=11166/0, merge=0/0, ticks=2952/0, in_queue=2952, util=96.46% 00:10:14.180 nvme0n4: ios=14939/0, merge=0/0, ticks=2683/0, 
in_queue=2683, util=96.59% 00:10:14.180 00:22:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.180 00:22:29 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:14.439 00:22:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.439 00:22:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:14.439 00:22:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.439 00:22:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:14.697 00:22:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.697 00:22:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:14.956 00:22:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:14.956 00:22:30 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:15.226 00:22:30 -- target/fio.sh@69 -- # fio_status=0 00:10:15.226 00:22:30 -- target/fio.sh@70 -- # wait 63553 00:10:15.226 00:22:30 -- target/fio.sh@70 -- # fio_status=4 00:10:15.226 00:22:30 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.484 00:22:31 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.484 00:22:31 -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.484 00:22:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:15.484 00:22:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.484 00:22:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.484 00:22:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:15.484 nvmf hotplug test: fio failed as expected 00:10:15.484 00:22:31 -- common/autotest_common.sh@1210 -- # return 0 00:10:15.484 00:22:31 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:15.484 00:22:31 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:15.484 00:22:31 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.742 00:22:31 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:15.742 00:22:31 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:15.742 00:22:31 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:15.742 00:22:31 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:15.742 00:22:31 -- target/fio.sh@91 -- # nvmftestfini 00:10:15.742 00:22:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:15.742 00:22:31 -- nvmf/common.sh@116 -- # sync 00:10:15.742 00:22:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:15.742 00:22:31 -- nvmf/common.sh@119 -- # set +e 00:10:15.742 00:22:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:15.742 00:22:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:15.742 rmmod nvme_tcp 00:10:15.742 rmmod nvme_fabrics 00:10:15.742 rmmod nvme_keyring 00:10:15.742 00:22:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:15.742 00:22:31 -- nvmf/common.sh@123 -- # set -e 00:10:15.742 00:22:31 -- nvmf/common.sh@124 -- # return 0 00:10:15.742 
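The disconnect check traced above (lsblk piped to grep for the SPDKISFASTANDAWESOME serial, autotest_common.sh @1198-@1210) waits until no block device carrying the target's serial remains before teardown continues. A condensed sketch of that helper, consistent with the calls visible in the trace; the real function's retry counter and timeout handling are only approximated here:

    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL    | grep -q -w "$serial" ||
              lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i > 15)) && return 1   # illustrative bound, not the script's actual limit
            sleep 1
        done
        return 0
    }

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME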
00:22:31 -- nvmf/common.sh@477 -- # '[' -n 63169 ']' 00:10:15.742 00:22:31 -- nvmf/common.sh@478 -- # killprocess 63169 00:10:15.742 00:22:31 -- common/autotest_common.sh@926 -- # '[' -z 63169 ']' 00:10:15.742 00:22:31 -- common/autotest_common.sh@930 -- # kill -0 63169 00:10:15.742 00:22:31 -- common/autotest_common.sh@931 -- # uname 00:10:15.742 00:22:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:15.742 00:22:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63169 00:10:15.742 killing process with pid 63169 00:10:15.742 00:22:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:15.742 00:22:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:15.742 00:22:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63169' 00:10:15.743 00:22:31 -- common/autotest_common.sh@945 -- # kill 63169 00:10:15.743 00:22:31 -- common/autotest_common.sh@950 -- # wait 63169 00:10:16.002 00:22:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:16.002 00:22:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:16.002 00:22:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:16.002 00:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.002 00:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.002 00:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.002 00:22:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:16.002 00:10:16.002 real 0m19.349s 00:10:16.002 user 1m12.162s 00:10:16.002 sys 0m10.944s 00:10:16.002 00:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.002 00:22:31 -- common/autotest_common.sh@10 -- # set +x 00:10:16.002 ************************************ 00:10:16.002 END TEST nvmf_fio_target 00:10:16.002 ************************************ 00:10:16.002 00:22:31 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:16.002 00:22:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:16.002 00:22:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.002 00:22:31 -- common/autotest_common.sh@10 -- # set +x 00:10:16.002 ************************************ 00:10:16.002 START TEST nvmf_bdevio 00:10:16.002 ************************************ 00:10:16.002 00:22:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:16.002 * Looking for test storage... 
00:10:16.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.002 00:22:31 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.002 00:22:31 -- nvmf/common.sh@7 -- # uname -s 00:10:16.002 00:22:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.002 00:22:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.002 00:22:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.002 00:22:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.002 00:22:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.002 00:22:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.002 00:22:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.002 00:22:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.002 00:22:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.002 00:22:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:16.002 00:22:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:16.002 00:22:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.002 00:22:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.002 00:22:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.002 00:22:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.002 00:22:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.002 00:22:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.002 00:22:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.002 00:22:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.002 00:22:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.002 00:22:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.002 00:22:31 -- 
paths/export.sh@5 -- # export PATH 00:10:16.002 00:22:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.002 00:22:31 -- nvmf/common.sh@46 -- # : 0 00:10:16.002 00:22:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:16.002 00:22:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:16.002 00:22:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:16.002 00:22:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.002 00:22:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.002 00:22:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:16.002 00:22:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:16.002 00:22:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:16.002 00:22:31 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.002 00:22:31 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.002 00:22:31 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:16.002 00:22:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:16.002 00:22:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.002 00:22:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:16.002 00:22:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:16.002 00:22:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:16.002 00:22:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.002 00:22:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:16.002 00:22:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.002 00:22:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:16.002 00:22:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:16.002 00:22:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.002 00:22:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.002 00:22:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:16.002 00:22:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:16.002 00:22:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.002 00:22:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.002 00:22:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.002 00:22:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.003 00:22:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.003 00:22:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.003 00:22:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.003 00:22:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.003 00:22:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:16.262 
00:22:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:16.262 Cannot find device "nvmf_tgt_br" 00:10:16.262 00:22:31 -- nvmf/common.sh@154 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.262 Cannot find device "nvmf_tgt_br2" 00:10:16.262 00:22:31 -- nvmf/common.sh@155 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:16.262 00:22:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:16.262 Cannot find device "nvmf_tgt_br" 00:10:16.262 00:22:31 -- nvmf/common.sh@157 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:16.262 Cannot find device "nvmf_tgt_br2" 00:10:16.262 00:22:31 -- nvmf/common.sh@158 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:16.262 00:22:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:16.262 00:22:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.262 00:22:31 -- nvmf/common.sh@161 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.262 00:22:31 -- nvmf/common.sh@162 -- # true 00:10:16.262 00:22:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.262 00:22:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.262 00:22:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.262 00:22:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.262 00:22:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.262 00:22:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.262 00:22:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.262 00:22:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:16.262 00:22:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:16.262 00:22:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:16.262 00:22:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:16.262 00:22:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:16.262 00:22:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:16.262 00:22:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:16.262 00:22:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.262 00:22:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.262 00:22:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:16.262 00:22:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:16.521 00:22:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.521 00:22:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.521 00:22:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.521 00:22:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.521 00:22:32 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.521 00:22:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:16.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:10:16.521 00:10:16.521 --- 10.0.0.2 ping statistics --- 00:10:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.521 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:10:16.521 00:22:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:16.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:16.521 00:10:16.521 --- 10.0.0.3 ping statistics --- 00:10:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.521 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:16.521 00:22:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:16.521 00:10:16.521 --- 10.0.0.1 ping statistics --- 00:10:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.521 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:16.521 00:22:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.521 00:22:32 -- nvmf/common.sh@421 -- # return 0 00:10:16.521 00:22:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:16.521 00:22:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.521 00:22:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:16.521 00:22:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:16.521 00:22:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.521 00:22:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:16.521 00:22:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:16.521 00:22:32 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:16.521 00:22:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:16.521 00:22:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:16.521 00:22:32 -- common/autotest_common.sh@10 -- # set +x 00:10:16.521 00:22:32 -- nvmf/common.sh@469 -- # nvmfpid=63858 00:10:16.521 00:22:32 -- nvmf/common.sh@470 -- # waitforlisten 63858 00:10:16.521 00:22:32 -- common/autotest_common.sh@819 -- # '[' -z 63858 ']' 00:10:16.521 00:22:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:16.521 00:22:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.521 00:22:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.521 00:22:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.521 00:22:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.521 00:22:32 -- common/autotest_common.sh@10 -- # set +x 00:10:16.521 [2024-09-29 00:22:32.264693] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
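For reference, the nvmf_veth_init sequence traced above builds the virtual topology the TCP tests run over: the initiator-side veth stays in the default namespace, the target-side pairs move into nvmf_tgt_ns_spdk, and everything is stitched together with the nvmf_br bridge. Condensed from the commands in the trace — the individual link-up steps are folded in and this is not the literal nvmf/common.sh body:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity pings: 10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the netns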
00:10:16.521 [2024-09-29 00:22:32.264793] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.780 [2024-09-29 00:22:32.401211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.780 [2024-09-29 00:22:32.459322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:16.780 [2024-09-29 00:22:32.459515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.780 [2024-09-29 00:22:32.459545] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.780 [2024-09-29 00:22:32.459553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.780 [2024-09-29 00:22:32.459928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:16.780 [2024-09-29 00:22:32.460112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:16.780 [2024-09-29 00:22:32.460271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:16.780 [2024-09-29 00:22:32.460275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.718 00:22:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:17.718 00:22:33 -- common/autotest_common.sh@852 -- # return 0 00:10:17.718 00:22:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:17.718 00:22:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 00:22:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.718 00:22:33 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.718 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 [2024-09-29 00:22:33.267039] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.718 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.718 00:22:33 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.718 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 Malloc0 00:10:17.718 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.718 00:22:33 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.718 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.718 00:22:33 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.718 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.718 00:22:33 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.718 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:17.718 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:17.718 
[2024-09-29 00:22:33.329259] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.718 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:17.718 00:22:33 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:17.718 00:22:33 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:17.718 00:22:33 -- nvmf/common.sh@520 -- # config=() 00:10:17.718 00:22:33 -- nvmf/common.sh@520 -- # local subsystem config 00:10:17.718 00:22:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:17.718 00:22:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:17.718 { 00:10:17.718 "params": { 00:10:17.718 "name": "Nvme$subsystem", 00:10:17.718 "trtype": "$TEST_TRANSPORT", 00:10:17.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.718 "adrfam": "ipv4", 00:10:17.718 "trsvcid": "$NVMF_PORT", 00:10:17.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.718 "hdgst": ${hdgst:-false}, 00:10:17.718 "ddgst": ${ddgst:-false} 00:10:17.718 }, 00:10:17.718 "method": "bdev_nvme_attach_controller" 00:10:17.718 } 00:10:17.718 EOF 00:10:17.718 )") 00:10:17.718 00:22:33 -- nvmf/common.sh@542 -- # cat 00:10:17.718 00:22:33 -- nvmf/common.sh@544 -- # jq . 00:10:17.718 00:22:33 -- nvmf/common.sh@545 -- # IFS=, 00:10:17.718 00:22:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:17.718 "params": { 00:10:17.718 "name": "Nvme1", 00:10:17.718 "trtype": "tcp", 00:10:17.718 "traddr": "10.0.0.2", 00:10:17.718 "adrfam": "ipv4", 00:10:17.718 "trsvcid": "4420", 00:10:17.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.718 "hdgst": false, 00:10:17.718 "ddgst": false 00:10:17.718 }, 00:10:17.718 "method": "bdev_nvme_attach_controller" 00:10:17.718 }' 00:10:17.718 [2024-09-29 00:22:33.378056] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:17.718 [2024-09-29 00:22:33.378432] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63899 ] 00:10:17.718 [2024-09-29 00:22:33.515170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.978 [2024-09-29 00:22:33.571265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.978 [2024-09-29 00:22:33.571320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.978 [2024-09-29 00:22:33.571329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.978 [2024-09-29 00:22:33.707366] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
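The gen_nvmf_target_json output printed above is handed to bdevio as /dev/fd/62 via process substitution; it attaches controller Nvme1 over TCP to nqn.2016-06.io.spdk:cnode1, producing the Nvme1n1 bdev listed as the single I/O target below. A standalone-equivalent sketch — the "params" block is copied from the trace, but the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual --json layout and is not itself shown in this log:

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json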
00:10:17.978 [2024-09-29 00:22:33.707527] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:17.978 I/O targets: 00:10:17.978 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:17.978 00:10:17.978 00:10:17.978 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.978 http://cunit.sourceforge.net/ 00:10:17.978 00:10:17.978 00:10:17.978 Suite: bdevio tests on: Nvme1n1 00:10:17.978 Test: blockdev write read block ...passed 00:10:17.978 Test: blockdev write zeroes read block ...passed 00:10:17.978 Test: blockdev write zeroes read no split ...passed 00:10:17.978 Test: blockdev write zeroes read split ...passed 00:10:17.978 Test: blockdev write zeroes read split partial ...passed 00:10:17.978 Test: blockdev reset ...[2024-09-29 00:22:33.739331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:17.978 [2024-09-29 00:22:33.739546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1bc80 (9): Bad file descriptor 00:10:17.978 [2024-09-29 00:22:33.757529] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:17.978 passed 00:10:17.978 Test: blockdev write read 8 blocks ...passed 00:10:17.978 Test: blockdev write read size > 128k ...passed 00:10:17.978 Test: blockdev write read invalid size ...passed 00:10:17.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.978 Test: blockdev write read max offset ...passed 00:10:17.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.978 Test: blockdev writev readv 8 blocks ...passed 00:10:17.978 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.978 Test: blockdev writev readv block ...passed 00:10:17.978 Test: blockdev writev readv size > 128k ...passed 00:10:17.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.978 Test: blockdev comparev and writev ...[2024-09-29 00:22:33.768015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.768065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.768091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.768105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.768427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.768455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.768477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.768490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.769050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.769088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.769111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.769124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.769492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.769519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.769540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.978 [2024-09-29 00:22:33.769553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.978 passed 00:10:17.978 Test: blockdev nvme passthru rw ...passed 00:10:17.978 Test: blockdev nvme passthru vendor specific ...passed 00:10:17.978 Test: blockdev nvme admin passthru ...[2024-09-29 00:22:33.771217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.978 [2024-09-29 00:22:33.771258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.771428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.978 [2024-09-29 00:22:33.771455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.771598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.978 [2024-09-29 00:22:33.771623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.978 [2024-09-29 00:22:33.771731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.978 [2024-09-29 00:22:33.771757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.978 passed 00:10:17.978 Test: blockdev copy ...passed 00:10:17.978 00:10:17.978 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.978 suites 1 1 n/a 0 0 00:10:17.978 tests 23 23 23 0 0 00:10:17.978 asserts 152 152 152 0 n/a 00:10:17.978 00:10:17.978 Elapsed time = 0.159 seconds 00:10:18.384 00:22:33 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.384 00:22:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:18.385 00:22:33 -- common/autotest_common.sh@10 -- # set +x 00:10:18.385 00:22:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:18.385 00:22:33 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:18.385 00:22:33 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:18.385 00:22:33 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:18.385 00:22:33 -- nvmf/common.sh@116 -- # sync 00:10:18.385 00:22:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:18.385 00:22:34 -- nvmf/common.sh@119 -- # set +e 00:10:18.385 00:22:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:18.385 00:22:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:18.385 rmmod nvme_tcp 00:10:18.385 rmmod nvme_fabrics 00:10:18.385 rmmod nvme_keyring 00:10:18.385 00:22:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:18.385 00:22:34 -- nvmf/common.sh@123 -- # set -e 00:10:18.385 00:22:34 -- nvmf/common.sh@124 -- # return 0 00:10:18.385 00:22:34 -- nvmf/common.sh@477 -- # '[' -n 63858 ']' 00:10:18.385 00:22:34 -- nvmf/common.sh@478 -- # killprocess 63858 00:10:18.385 00:22:34 -- common/autotest_common.sh@926 -- # '[' -z 63858 ']' 00:10:18.385 00:22:34 -- common/autotest_common.sh@930 -- # kill -0 63858 00:10:18.385 00:22:34 -- common/autotest_common.sh@931 -- # uname 00:10:18.385 00:22:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.385 00:22:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63858 00:10:18.385 killing process with pid 63858 00:10:18.385 00:22:34 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:10:18.385 00:22:34 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:10:18.385 00:22:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63858' 00:10:18.385 00:22:34 -- common/autotest_common.sh@945 -- # kill 63858 00:10:18.385 00:22:34 -- common/autotest_common.sh@950 -- # wait 63858 00:10:18.662 00:22:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:18.662 00:22:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:18.662 00:22:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:18.662 00:22:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.662 00:22:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.662 00:22:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.662 00:22:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:18.662 00:10:18.662 real 0m2.619s 00:10:18.662 user 0m8.577s 00:10:18.662 sys 0m0.592s 00:10:18.662 00:22:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.662 ************************************ 00:10:18.662 END TEST nvmf_bdevio 00:10:18.662 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:10:18.662 ************************************ 00:10:18.662 00:22:34 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:18.662 00:22:34 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:18.662 00:22:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:18.662 00:22:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:18.662 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:10:18.662 ************************************ 00:10:18.662 START TEST nvmf_bdevio_no_huge 00:10:18.662 ************************************ 00:10:18.662 00:22:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:18.662 * Looking for test storage... 
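(The nvmftestfini/killprocess sequence traced above boils down to the teardown sketched below; pid 63858 and the interface/namespace names are the ones from this run, and the netns deletion is an assumption, since _remove_spdk_ns has its output redirected away in the trace:)

    sync
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics / nvme_keyring, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill 63858 && wait 63858           # the nvmf_tgt started for nvmf_bdevio
    ip netns delete nvmf_tgt_ns_spdk   # assumed: done inside _remove_spdk_ns (not visible above)
    ip -4 addr flush nvmf_init_if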
00:10:18.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.662 00:22:34 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.662 00:22:34 -- nvmf/common.sh@7 -- # uname -s 00:10:18.662 00:22:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.662 00:22:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.662 00:22:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.662 00:22:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.662 00:22:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.662 00:22:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.662 00:22:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.662 00:22:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.662 00:22:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.662 00:22:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:18.662 00:22:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:18.662 00:22:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.662 00:22:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.662 00:22:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.662 00:22:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.662 00:22:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.662 00:22:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.662 00:22:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.662 00:22:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.662 00:22:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.662 00:22:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.662 00:22:34 -- 
paths/export.sh@5 -- # export PATH 00:10:18.662 00:22:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.662 00:22:34 -- nvmf/common.sh@46 -- # : 0 00:10:18.662 00:22:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.662 00:22:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.662 00:22:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.662 00:22:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.662 00:22:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.662 00:22:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.662 00:22:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.662 00:22:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.662 00:22:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.662 00:22:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.662 00:22:34 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:18.662 00:22:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.662 00:22:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.662 00:22:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.662 00:22:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.662 00:22:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.662 00:22:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.662 00:22:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.662 00:22:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.662 00:22:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.662 00:22:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.662 00:22:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.662 00:22:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.662 00:22:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.662 00:22:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.662 00:22:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.662 00:22:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.662 00:22:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.662 00:22:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.662 00:22:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.662 00:22:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.662 00:22:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.662 00:22:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.662 00:22:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.922 
00:22:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.922 Cannot find device "nvmf_tgt_br" 00:10:18.922 00:22:34 -- nvmf/common.sh@154 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.922 Cannot find device "nvmf_tgt_br2" 00:10:18.922 00:22:34 -- nvmf/common.sh@155 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.922 00:22:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.922 Cannot find device "nvmf_tgt_br" 00:10:18.922 00:22:34 -- nvmf/common.sh@157 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.922 Cannot find device "nvmf_tgt_br2" 00:10:18.922 00:22:34 -- nvmf/common.sh@158 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.922 00:22:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:18.922 00:22:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.922 00:22:34 -- nvmf/common.sh@161 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.922 00:22:34 -- nvmf/common.sh@162 -- # true 00:10:18.922 00:22:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.922 00:22:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.922 00:22:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.922 00:22:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.922 00:22:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.922 00:22:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.922 00:22:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.922 00:22:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.922 00:22:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.922 00:22:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:18.922 00:22:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:18.922 00:22:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:19.181 00:22:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:19.181 00:22:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.181 00:22:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.181 00:22:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.181 00:22:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:19.181 00:22:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:19.181 00:22:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.181 00:22:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.181 00:22:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.181 00:22:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.181 00:22:34 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.181 00:22:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:19.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:19.182 00:10:19.182 --- 10.0.0.2 ping statistics --- 00:10:19.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.182 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:19.182 00:22:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:19.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:19.182 00:10:19.182 --- 10.0.0.3 ping statistics --- 00:10:19.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.182 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:19.182 00:22:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:19.182 00:10:19.182 --- 10.0.0.1 ping statistics --- 00:10:19.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.182 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:19.182 00:22:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.182 00:22:34 -- nvmf/common.sh@421 -- # return 0 00:10:19.182 00:22:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:19.182 00:22:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.182 00:22:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:19.182 00:22:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:19.182 00:22:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.182 00:22:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:19.182 00:22:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:19.182 00:22:34 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:19.182 00:22:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:19.182 00:22:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:19.182 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:10:19.182 00:22:34 -- nvmf/common.sh@469 -- # nvmfpid=64076 00:10:19.182 00:22:34 -- nvmf/common.sh@470 -- # waitforlisten 64076 00:10:19.182 00:22:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:19.182 00:22:34 -- common/autotest_common.sh@819 -- # '[' -z 64076 ']' 00:10:19.182 00:22:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.182 00:22:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:19.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.182 00:22:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.182 00:22:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:19.182 00:22:34 -- common/autotest_common.sh@10 -- # set +x 00:10:19.182 [2024-09-29 00:22:34.950816] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:19.182 [2024-09-29 00:22:34.951131] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:19.441 [2024-09-29 00:22:35.097601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.441 [2024-09-29 00:22:35.187088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.441 [2024-09-29 00:22:35.187240] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.441 [2024-09-29 00:22:35.187252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.441 [2024-09-29 00:22:35.187260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.441 [2024-09-29 00:22:35.187443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:19.441 [2024-09-29 00:22:35.187745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:19.441 [2024-09-29 00:22:35.187799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.441 [2024-09-29 00:22:35.187799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:20.376 00:22:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:20.376 00:22:35 -- common/autotest_common.sh@852 -- # return 0 00:10:20.376 00:22:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:20.376 00:22:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 00:22:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.376 00:22:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.376 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 [2024-09-29 00:22:35.917089] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.376 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.376 00:22:35 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.376 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 Malloc0 00:10:20.376 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.376 00:22:35 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.376 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.376 00:22:35 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.376 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.376 00:22:35 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.376 00:22:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:20.376 00:22:35 -- common/autotest_common.sh@10 -- # set +x 00:10:20.376 
[2024-09-29 00:22:35.959334] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.376 00:22:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:20.376 00:22:35 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.376 00:22:35 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:20.376 00:22:35 -- nvmf/common.sh@520 -- # config=() 00:10:20.376 00:22:35 -- nvmf/common.sh@520 -- # local subsystem config 00:10:20.376 00:22:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:20.376 00:22:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:20.376 { 00:10:20.376 "params": { 00:10:20.376 "name": "Nvme$subsystem", 00:10:20.376 "trtype": "$TEST_TRANSPORT", 00:10:20.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.376 "adrfam": "ipv4", 00:10:20.376 "trsvcid": "$NVMF_PORT", 00:10:20.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.377 "hdgst": ${hdgst:-false}, 00:10:20.377 "ddgst": ${ddgst:-false} 00:10:20.377 }, 00:10:20.377 "method": "bdev_nvme_attach_controller" 00:10:20.377 } 00:10:20.377 EOF 00:10:20.377 )") 00:10:20.377 00:22:35 -- nvmf/common.sh@542 -- # cat 00:10:20.377 00:22:35 -- nvmf/common.sh@544 -- # jq . 00:10:20.377 00:22:35 -- nvmf/common.sh@545 -- # IFS=, 00:10:20.377 00:22:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:20.377 "params": { 00:10:20.377 "name": "Nvme1", 00:10:20.377 "trtype": "tcp", 00:10:20.377 "traddr": "10.0.0.2", 00:10:20.377 "adrfam": "ipv4", 00:10:20.377 "trsvcid": "4420", 00:10:20.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.377 "hdgst": false, 00:10:20.377 "ddgst": false 00:10:20.377 }, 00:10:20.377 "method": "bdev_nvme_attach_controller" 00:10:20.377 }' 00:10:20.377 [2024-09-29 00:22:36.011694] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:20.377 [2024-09-29 00:22:36.011763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64112 ] 00:10:20.377 [2024-09-29 00:22:36.144683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.635 [2024-09-29 00:22:36.247950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.635 [2024-09-29 00:22:36.248088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.635 [2024-09-29 00:22:36.248093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.635 [2024-09-29 00:22:36.399224] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
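(Compared with the earlier hugepage-backed bdevio run, the only material change visible in the DPDK EAL parameter lines is the memory setup; roughly:)

    # earlier run:  bdevio --json /dev/fd/62
    #   EAL: --huge-unlink --iova-mode=pa --match-allocations
    # this run:     bdevio --json /dev/fd/62 --no-huge -s 1024
    #   EAL: -m 1024 --no-huge --iova-mode=va
    # i.e. the no-huge variant caps bdevio (and the nvmf_tgt started with the same
    # flags) at 1024 MiB of ordinary memory and switches to VA-based IOVA.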
00:10:20.635 [2024-09-29 00:22:36.399509] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:20.635 I/O targets: 00:10:20.635 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:20.635 00:10:20.635 00:10:20.635 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.635 http://cunit.sourceforge.net/ 00:10:20.635 00:10:20.635 00:10:20.635 Suite: bdevio tests on: Nvme1n1 00:10:20.635 Test: blockdev write read block ...passed 00:10:20.635 Test: blockdev write zeroes read block ...passed 00:10:20.635 Test: blockdev write zeroes read no split ...passed 00:10:20.635 Test: blockdev write zeroes read split ...passed 00:10:20.635 Test: blockdev write zeroes read split partial ...passed 00:10:20.635 Test: blockdev reset ...[2024-09-29 00:22:36.443599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:20.635 [2024-09-29 00:22:36.443773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feb680 (9): Bad file descriptor 00:10:20.635 passed 00:10:20.635 Test: blockdev write read 8 blocks ...[2024-09-29 00:22:36.458456] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.635 passed 00:10:20.635 Test: blockdev write read size > 128k ...passed 00:10:20.635 Test: blockdev write read invalid size ...passed 00:10:20.635 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.635 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.635 Test: blockdev write read max offset ...passed 00:10:20.635 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.635 Test: blockdev writev readv 8 blocks ...passed 00:10:20.635 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.635 Test: blockdev writev readv block ...passed 00:10:20.635 Test: blockdev writev readv size > 128k ...passed 00:10:20.635 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.635 Test: blockdev comparev and writev ...[2024-09-29 00:22:36.469983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.470440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.470477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.470491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.470839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.470868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.470889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.470902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.471205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.471231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.471253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.471266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.471581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.471607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.471628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:20.635 [2024-09-29 00:22:36.471640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:20.635 passed 00:10:20.635 Test: blockdev nvme passthru rw ...passed 00:10:20.635 Test: blockdev nvme passthru vendor specific ...[2024-09-29 00:22:36.472954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.635 passed 00:10:20.635 Test: blockdev nvme admin passthru ...[2024-09-29 00:22:36.473219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.473353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.635 [2024-09-29 00:22:36.473375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.473483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.635 [2024-09-29 00:22:36.473504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:20.635 [2024-09-29 00:22:36.473616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:20.635 [2024-09-29 00:22:36.473651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:20.894 passed 00:10:20.894 Test: blockdev copy ...passed 00:10:20.894 00:10:20.894 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.894 suites 1 1 n/a 0 0 00:10:20.894 tests 23 23 23 0 0 00:10:20.894 asserts 152 152 152 0 n/a 00:10:20.894 00:10:20.894 Elapsed time = 0.166 seconds 00:10:21.153 00:22:36 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.153 00:22:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.153 00:22:36 -- common/autotest_common.sh@10 -- # set +x 00:10:21.153 00:22:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.153 00:22:36 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:21.153 00:22:36 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:21.153 00:22:36 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:21.153 00:22:36 -- nvmf/common.sh@116 -- # sync 00:10:21.153 00:22:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:21.153 00:22:36 -- nvmf/common.sh@119 -- # set +e 00:10:21.153 00:22:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:21.153 00:22:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:21.153 rmmod nvme_tcp 00:10:21.153 rmmod nvme_fabrics 00:10:21.153 rmmod nvme_keyring 00:10:21.153 00:22:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:21.153 00:22:36 -- nvmf/common.sh@123 -- # set -e 00:10:21.153 00:22:36 -- nvmf/common.sh@124 -- # return 0 00:10:21.153 00:22:36 -- nvmf/common.sh@477 -- # '[' -n 64076 ']' 00:10:21.153 00:22:36 -- nvmf/common.sh@478 -- # killprocess 64076 00:10:21.153 00:22:36 -- common/autotest_common.sh@926 -- # '[' -z 64076 ']' 00:10:21.153 00:22:36 -- common/autotest_common.sh@930 -- # kill -0 64076 00:10:21.153 00:22:36 -- common/autotest_common.sh@931 -- # uname 00:10:21.153 00:22:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.153 00:22:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64076 00:10:21.412 killing process with pid 64076 00:10:21.412 00:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:10:21.412 00:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:10:21.412 00:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64076' 00:10:21.412 00:22:37 -- common/autotest_common.sh@945 -- # kill 64076 00:10:21.412 00:22:37 -- common/autotest_common.sh@950 -- # wait 64076 00:10:21.672 00:22:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:21.672 00:22:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:21.672 00:22:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:21.672 00:22:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.672 00:22:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:21.672 00:22:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.672 00:22:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.672 00:22:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.672 00:22:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:21.672 ************************************ 00:10:21.672 END TEST nvmf_bdevio_no_huge 00:10:21.672 ************************************ 00:10:21.672 00:10:21.672 real 0m3.003s 00:10:21.672 user 0m9.825s 00:10:21.672 sys 0m1.084s 00:10:21.672 00:22:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:21.672 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:10:21.672 00:22:37 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:21.672 00:22:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:21.672 00:22:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:21.672 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:10:21.672 ************************************ 00:10:21.672 START TEST nvmf_tls 00:10:21.672 ************************************ 00:10:21.672 00:22:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:21.672 * Looking for test storage... 
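(The nvmftestinit that tls.sh performs next rebuilds the same veth/netns topology already traced for the previous test; condensed from those trace lines, with link-up and iptables ACCEPT rules omitted:)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br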
00:10:21.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.931 00:22:37 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.931 00:22:37 -- nvmf/common.sh@7 -- # uname -s 00:10:21.931 00:22:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.931 00:22:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.931 00:22:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.931 00:22:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.931 00:22:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.931 00:22:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.931 00:22:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.931 00:22:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.931 00:22:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.931 00:22:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.931 00:22:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:21.931 00:22:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:10:21.931 00:22:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.931 00:22:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.931 00:22:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.931 00:22:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.931 00:22:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.931 00:22:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.931 00:22:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.931 00:22:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.931 00:22:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.931 00:22:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.931 00:22:37 -- paths/export.sh@5 
-- # export PATH 00:10:21.931 00:22:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.931 00:22:37 -- nvmf/common.sh@46 -- # : 0 00:10:21.931 00:22:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:21.931 00:22:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:21.931 00:22:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:21.931 00:22:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.931 00:22:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.931 00:22:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:21.931 00:22:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:21.931 00:22:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:21.931 00:22:37 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:21.931 00:22:37 -- target/tls.sh@71 -- # nvmftestinit 00:10:21.931 00:22:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:21.931 00:22:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.931 00:22:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:21.931 00:22:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:21.931 00:22:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:21.931 00:22:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.931 00:22:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.931 00:22:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.931 00:22:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:21.931 00:22:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:21.931 00:22:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:21.932 00:22:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:21.932 00:22:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:21.932 00:22:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:21.932 00:22:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.932 00:22:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.932 00:22:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.932 00:22:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:21.932 00:22:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.932 00:22:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.932 00:22:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.932 00:22:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.932 00:22:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.932 00:22:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.932 00:22:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.932 00:22:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.932 00:22:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:21.932 00:22:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:10:21.932 Cannot find device "nvmf_tgt_br" 00:10:21.932 00:22:37 -- nvmf/common.sh@154 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.932 Cannot find device "nvmf_tgt_br2" 00:10:21.932 00:22:37 -- nvmf/common.sh@155 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:21.932 00:22:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:21.932 Cannot find device "nvmf_tgt_br" 00:10:21.932 00:22:37 -- nvmf/common.sh@157 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:21.932 Cannot find device "nvmf_tgt_br2" 00:10:21.932 00:22:37 -- nvmf/common.sh@158 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:21.932 00:22:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:21.932 00:22:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.932 00:22:37 -- nvmf/common.sh@161 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.932 00:22:37 -- nvmf/common.sh@162 -- # true 00:10:21.932 00:22:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.932 00:22:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.932 00:22:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.932 00:22:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.932 00:22:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.932 00:22:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.932 00:22:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.932 00:22:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:22.192 00:22:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:22.192 00:22:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:22.192 00:22:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:22.192 00:22:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:22.192 00:22:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:22.192 00:22:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.192 00:22:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.192 00:22:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:22.192 00:22:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:22.192 00:22:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:22.192 00:22:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:22.192 00:22:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:22.192 00:22:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:22.192 00:22:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:22.192 00:22:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:10:22.192 00:22:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:22.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:22.192 00:10:22.192 --- 10.0.0.2 ping statistics --- 00:10:22.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.192 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:22.192 00:22:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:22.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:22.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:22.192 00:10:22.192 --- 10.0.0.3 ping statistics --- 00:10:22.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.192 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:22.192 00:22:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:22.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:22.192 00:10:22.192 --- 10.0.0.1 ping statistics --- 00:10:22.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.192 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:22.192 00:22:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.192 00:22:37 -- nvmf/common.sh@421 -- # return 0 00:10:22.192 00:22:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:22.192 00:22:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.192 00:22:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:22.192 00:22:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:22.192 00:22:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.192 00:22:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:22.192 00:22:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:22.192 00:22:37 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:22.192 00:22:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:22.192 00:22:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:22.192 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.192 00:22:37 -- nvmf/common.sh@469 -- # nvmfpid=64291 00:10:22.192 00:22:37 -- nvmf/common.sh@470 -- # waitforlisten 64291 00:10:22.192 00:22:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:22.192 00:22:37 -- common/autotest_common.sh@819 -- # '[' -z 64291 ']' 00:10:22.192 00:22:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.192 00:22:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:22.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.192 00:22:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.192 00:22:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:22.192 00:22:37 -- common/autotest_common.sh@10 -- # set +x 00:10:22.192 [2024-09-29 00:22:37.977116] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:22.192 [2024-09-29 00:22:37.977402] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.452 [2024-09-29 00:22:38.121249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.452 [2024-09-29 00:22:38.188666] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.452 [2024-09-29 00:22:38.188837] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.452 [2024-09-29 00:22:38.188852] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.452 [2024-09-29 00:22:38.188863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.452 [2024-09-29 00:22:38.188899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.391 00:22:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:23.391 00:22:38 -- common/autotest_common.sh@852 -- # return 0 00:10:23.391 00:22:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:23.391 00:22:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:23.391 00:22:38 -- common/autotest_common.sh@10 -- # set +x 00:10:23.391 00:22:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.391 00:22:38 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:23.391 00:22:38 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:23.654 true 00:10:23.654 00:22:39 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:23.654 00:22:39 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:23.912 00:22:39 -- target/tls.sh@82 -- # version=0 00:10:23.912 00:22:39 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:23.912 00:22:39 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:24.171 00:22:39 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:24.171 00:22:39 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:24.429 00:22:40 -- target/tls.sh@90 -- # version=13 00:10:24.429 00:22:40 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:24.429 00:22:40 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:24.687 00:22:40 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:24.687 00:22:40 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:24.946 00:22:40 -- target/tls.sh@98 -- # version=7 00:10:24.946 00:22:40 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:24.946 00:22:40 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:24.946 00:22:40 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:25.205 00:22:40 -- target/tls.sh@105 -- # ktls=false 00:10:25.205 00:22:40 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:25.205 00:22:40 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:25.464 00:22:41 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:25.464 00:22:41 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:10:25.464 00:22:41 -- target/tls.sh@113 -- # ktls=true 00:10:25.464 00:22:41 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:25.464 00:22:41 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:25.722 00:22:41 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:25.722 00:22:41 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:25.982 00:22:41 -- target/tls.sh@121 -- # ktls=false 00:10:25.982 00:22:41 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:25.982 00:22:41 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:25.982 00:22:41 -- target/tls.sh@49 -- # local key hash crc 00:10:25.982 00:22:41 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:25.982 00:22:41 -- target/tls.sh@51 -- # hash=01 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # gzip -1 -c 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # tail -c8 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # head -c 4 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # crc='p$H�' 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.982 00:22:41 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.982 00:22:41 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:25.982 00:22:41 -- target/tls.sh@49 -- # local key hash crc 00:10:25.982 00:22:41 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:25.982 00:22:41 -- target/tls.sh@51 -- # hash=01 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # gzip -1 -c 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # tail -c8 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # head -c 4 00:10:25.982 00:22:41 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:25.982 00:22:41 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.982 00:22:41 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.982 00:22:41 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.982 00:22:41 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:25.982 00:22:41 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:25.982 00:22:41 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:25.982 00:22:41 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:25.982 00:22:41 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:25.982 00:22:41 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:26.241 00:22:41 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:26.500 00:22:42 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:26.500 00:22:42 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:26.500 00:22:42 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:26.759 [2024-09-29 00:22:42.571991] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.759 00:22:42 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:27.016 00:22:42 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:27.275 [2024-09-29 00:22:42.992080] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:27.275 [2024-09-29 00:22:42.992319] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.275 00:22:43 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:27.533 malloc0 00:10:27.533 00:22:43 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:27.792 00:22:43 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:28.051 00:22:43 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:38.105 Initializing NVMe Controllers 00:10:38.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:38.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:38.105 Initialization complete. Launching workers. 
00:10:38.105 ======================================================== 00:10:38.105 Latency(us) 00:10:38.105 Device Information : IOPS MiB/s Average min max 00:10:38.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10939.90 42.73 5851.27 1066.59 8300.02 00:10:38.105 ======================================================== 00:10:38.105 Total : 10939.90 42.73 5851.27 1066.59 8300.02 00:10:38.105 00:10:38.106 00:22:53 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:38.106 00:22:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:38.106 00:22:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:38.106 00:22:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:38.106 00:22:53 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:38.106 00:22:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:38.106 00:22:53 -- target/tls.sh@28 -- # bdevperf_pid=64534 00:10:38.106 00:22:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:38.106 00:22:53 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:38.106 00:22:53 -- target/tls.sh@31 -- # waitforlisten 64534 /var/tmp/bdevperf.sock 00:10:38.106 00:22:53 -- common/autotest_common.sh@819 -- # '[' -z 64534 ']' 00:10:38.106 00:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:38.106 00:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:38.106 00:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:38.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:38.106 00:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:38.106 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:10:38.365 [2024-09-29 00:22:53.969638] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:38.365 [2024-09-29 00:22:53.969944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64534 ] 00:10:38.365 [2024-09-29 00:22:54.108209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.365 [2024-09-29 00:22:54.180812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.303 00:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:39.303 00:22:54 -- common/autotest_common.sh@852 -- # return 0 00:10:39.303 00:22:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:39.303 [2024-09-29 00:22:55.094270] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:39.563 TLSTESTn1 00:10:39.563 00:22:55 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:39.563 Running I/O for 10 seconds... 
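For reference, the key1.txt and key2.txt files passed via --psk above come from the format_interchange_psk helper traced earlier. A condensed, standalone sketch of that derivation (variable names here are illustrative; it relies on the CRC32 that gzip writes into its trailer, and it assumes the CRC bytes contain no NUL or trailing newline, since they pass through a shell command substitution exactly as in the trace):

    key=00112233445566778899aabbccddeeff                        # configured PSK in hex
    hash=01                                                     # hash field of the NVMeTLSkey-1 header (the key_long case later in the run uses 02)
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)    # first 4 trailer bytes of the gzip stream = CRC32 of the input
    psk_interchange="NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    echo -n "$psk_interchange" > key1.txt && chmod 0600 key1.txt   # 0600 matters: looser permissions are rejected later in the run

This reproduces the NVMeTLSkey-1:01:MDAx...JEiQ: value shown above for the first key; the second key file is built the same way from the reversed hex string.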
00:10:49.543 00:10:49.544 Latency(us) 00:10:49.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.544 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:49.544 Verification LBA range: start 0x0 length 0x2000 00:10:49.544 TLSTESTn1 : 10.01 6209.62 24.26 0.00 0.00 20581.03 4051.32 20614.05 00:10:49.544 =================================================================================================================== 00:10:49.544 Total : 6209.62 24.26 0.00 0.00 20581.03 4051.32 20614.05 00:10:49.544 0 00:10:49.544 00:23:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:49.544 00:23:05 -- target/tls.sh@45 -- # killprocess 64534 00:10:49.544 00:23:05 -- common/autotest_common.sh@926 -- # '[' -z 64534 ']' 00:10:49.544 00:23:05 -- common/autotest_common.sh@930 -- # kill -0 64534 00:10:49.544 00:23:05 -- common/autotest_common.sh@931 -- # uname 00:10:49.544 00:23:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:49.544 00:23:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64534 00:10:49.544 killing process with pid 64534 00:10:49.544 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.544 00:10:49.544 Latency(us) 00:10:49.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.544 =================================================================================================================== 00:10:49.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:49.544 00:23:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:49.544 00:23:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:49.544 00:23:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64534' 00:10:49.544 00:23:05 -- common/autotest_common.sh@945 -- # kill 64534 00:10:49.544 00:23:05 -- common/autotest_common.sh@950 -- # wait 64534 00:10:49.810 00:23:05 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.810 00:23:05 -- common/autotest_common.sh@640 -- # local es=0 00:10:49.810 00:23:05 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.810 00:23:05 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:49.810 00:23:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.810 00:23:05 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:49.810 00:23:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:49.810 00:23:05 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:49.810 00:23:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:49.810 00:23:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:49.810 00:23:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:49.810 00:23:05 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:49.810 00:23:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.810 00:23:05 -- target/tls.sh@28 -- # bdevperf_pid=64667 00:10:49.810 00:23:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:49.810 00:23:05 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:49.810 00:23:05 -- target/tls.sh@31 -- # waitforlisten 64667 /var/tmp/bdevperf.sock 00:10:49.810 00:23:05 -- common/autotest_common.sh@819 -- # '[' -z 64667 ']' 00:10:49.810 00:23:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.810 00:23:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:49.810 00:23:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.810 00:23:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:49.810 00:23:05 -- common/autotest_common.sh@10 -- # set +x 00:10:49.810 [2024-09-29 00:23:05.619216] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:49.810 [2024-09-29 00:23:05.619527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64667 ] 00:10:50.069 [2024-09-29 00:23:05.751669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.069 [2024-09-29 00:23:05.804077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.003 00:23:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:51.003 00:23:06 -- common/autotest_common.sh@852 -- # return 0 00:10:51.003 00:23:06 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:51.003 [2024-09-29 00:23:06.821626] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:51.003 [2024-09-29 00:23:06.830607] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:51.003 [2024-09-29 00:23:06.831041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f4650 (107): Transport endpoint is not connected 00:10:51.003 [2024-09-29 00:23:06.832035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f4650 (9): Bad file descriptor 00:10:51.003 [2024-09-29 00:23:06.833032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:51.003 [2024-09-29 00:23:06.833388] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:51.003 [2024-09-29 00:23:06.833402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:51.003 request: 00:10:51.003 { 00:10:51.003 "name": "TLSTEST", 00:10:51.003 "trtype": "tcp", 00:10:51.003 "traddr": "10.0.0.2", 00:10:51.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.003 "adrfam": "ipv4", 00:10:51.004 "trsvcid": "4420", 00:10:51.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.004 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:51.004 "method": "bdev_nvme_attach_controller", 00:10:51.004 "req_id": 1 00:10:51.004 } 00:10:51.004 Got JSON-RPC error response 00:10:51.004 response: 00:10:51.004 { 00:10:51.004 "code": -32602, 00:10:51.004 "message": "Invalid parameters" 00:10:51.004 } 00:10:51.263 00:23:06 -- target/tls.sh@36 -- # killprocess 64667 00:10:51.263 00:23:06 -- common/autotest_common.sh@926 -- # '[' -z 64667 ']' 00:10:51.263 00:23:06 -- common/autotest_common.sh@930 -- # kill -0 64667 00:10:51.263 00:23:06 -- common/autotest_common.sh@931 -- # uname 00:10:51.263 00:23:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:51.263 00:23:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64667 00:10:51.263 killing process with pid 64667 00:10:51.263 Received shutdown signal, test time was about 10.000000 seconds 00:10:51.263 00:10:51.263 Latency(us) 00:10:51.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.263 =================================================================================================================== 00:10:51.263 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:51.263 00:23:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:51.263 00:23:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:51.263 00:23:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64667' 00:10:51.263 00:23:06 -- common/autotest_common.sh@945 -- # kill 64667 00:10:51.263 00:23:06 -- common/autotest_common.sh@950 -- # wait 64667 00:10:51.263 00:23:07 -- target/tls.sh@37 -- # return 1 00:10:51.263 00:23:07 -- common/autotest_common.sh@643 -- # es=1 00:10:51.263 00:23:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:51.263 00:23:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:51.263 00:23:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:51.263 00:23:07 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.263 00:23:07 -- common/autotest_common.sh@640 -- # local es=0 00:10:51.263 00:23:07 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.263 00:23:07 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:51.263 00:23:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:51.263 00:23:07 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:51.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
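Each of the attach attempts in this part of the run goes through the same run_bdevperf helper, and the NOT wrapper merely inverts the expected exit status, so the wrong-key case above has to fail for the test to pass. A condensed sketch of the initiator side under those assumptions (bdevperf, rpc.py and bdevperf.py stand for the full build/examples/bdevperf, scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths from the trace, and the wait-for-socket loop is reduced to a comment):

    # start bdevperf idle (-z) so it only opens its private RPC socket
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # ... wait until /var/tmp/bdevperf.sock answers ...
    # attach a TLS-protected controller using the PSK under test; this is the step that fails for a bad key
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
    # only a successful attach is followed by actual I/O through the new TLSTEST bdev
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"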
00:10:51.263 00:23:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:51.263 00:23:07 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:51.263 00:23:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:51.263 00:23:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:51.263 00:23:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:10:51.263 00:23:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:51.263 00:23:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:51.263 00:23:07 -- target/tls.sh@28 -- # bdevperf_pid=64695 00:10:51.263 00:23:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:51.263 00:23:07 -- target/tls.sh@31 -- # waitforlisten 64695 /var/tmp/bdevperf.sock 00:10:51.263 00:23:07 -- common/autotest_common.sh@819 -- # '[' -z 64695 ']' 00:10:51.263 00:23:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:51.263 00:23:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:51.263 00:23:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:51.263 00:23:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:51.263 00:23:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:51.263 00:23:07 -- common/autotest_common.sh@10 -- # set +x 00:10:51.263 [2024-09-29 00:23:07.101309] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:10:51.263 [2024-09-29 00:23:07.101430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64695 ] 00:10:51.522 [2024-09-29 00:23:07.227636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.522 [2024-09-29 00:23:07.282535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.455 00:23:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:52.455 00:23:08 -- common/autotest_common.sh@852 -- # return 0 00:10:52.456 00:23:08 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:52.784 [2024-09-29 00:23:08.317724] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:52.784 [2024-09-29 00:23:08.322707] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:52.784 [2024-09-29 00:23:08.322903] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:52.784 [2024-09-29 00:23:08.323079] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:52.784 [2024-09-29 00:23:08.323457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9f650 (107): Transport endpoint is not connected 00:10:52.784 [2024-09-29 00:23:08.324442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9f650 (9): Bad file descriptor 00:10:52.784 [2024-09-29 00:23:08.325438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:52.784 [2024-09-29 00:23:08.325466] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:52.784 [2024-09-29 00:23:08.325477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
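The target-side message above, "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", is the reason this case is expected to fail: the listener builds the TLS PSK identity from the connecting host NQN plus the subsystem NQN, and only the host1/cnode1 pair was given a key during setup. Schematically (same RPC as the earlier setup, with rpc.py standing for scripts/rpc.py and the key path shortened):

    # registered at setup time: a PSK for the pair (nqn.2016-06.io.spdk:host1, nqn.2016-06.io.spdk:cnode1)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt
    # this test connects as host2, so the server looks up
    #   "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
    # finds no registered PSK, and the handshake (and therefore the attach) is rejected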
00:10:52.784 request: 00:10:52.784 { 00:10:52.784 "name": "TLSTEST", 00:10:52.784 "trtype": "tcp", 00:10:52.784 "traddr": "10.0.0.2", 00:10:52.784 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:52.784 "adrfam": "ipv4", 00:10:52.784 "trsvcid": "4420", 00:10:52.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.784 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:52.784 "method": "bdev_nvme_attach_controller", 00:10:52.784 "req_id": 1 00:10:52.784 } 00:10:52.784 Got JSON-RPC error response 00:10:52.784 response: 00:10:52.784 { 00:10:52.784 "code": -32602, 00:10:52.784 "message": "Invalid parameters" 00:10:52.784 } 00:10:52.784 00:23:08 -- target/tls.sh@36 -- # killprocess 64695 00:10:52.784 00:23:08 -- common/autotest_common.sh@926 -- # '[' -z 64695 ']' 00:10:52.784 00:23:08 -- common/autotest_common.sh@930 -- # kill -0 64695 00:10:52.784 00:23:08 -- common/autotest_common.sh@931 -- # uname 00:10:52.784 00:23:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:52.784 00:23:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64695 00:10:52.784 00:23:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:52.784 00:23:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:52.784 killing process with pid 64695 00:10:52.784 00:23:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64695' 00:10:52.784 00:23:08 -- common/autotest_common.sh@945 -- # kill 64695 00:10:52.784 00:23:08 -- common/autotest_common.sh@950 -- # wait 64695 00:10:52.784 Received shutdown signal, test time was about 10.000000 seconds 00:10:52.784 00:10:52.784 Latency(us) 00:10:52.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.784 =================================================================================================================== 00:10:52.784 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:52.784 00:23:08 -- target/tls.sh@37 -- # return 1 00:10:52.784 00:23:08 -- common/autotest_common.sh@643 -- # es=1 00:10:52.784 00:23:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:52.784 00:23:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:52.784 00:23:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:52.784 00:23:08 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:52.784 00:23:08 -- common/autotest_common.sh@640 -- # local es=0 00:10:52.784 00:23:08 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:52.785 00:23:08 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:52.785 00:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:52.785 00:23:08 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:52.785 00:23:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:52.785 00:23:08 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:52.785 00:23:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:52.785 00:23:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:52.785 00:23:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:52.785 00:23:08 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:52.785 00:23:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:52.785 00:23:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:52.785 00:23:08 -- target/tls.sh@28 -- # bdevperf_pid=64722 00:10:52.785 00:23:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:52.785 00:23:08 -- target/tls.sh@31 -- # waitforlisten 64722 /var/tmp/bdevperf.sock 00:10:52.785 00:23:08 -- common/autotest_common.sh@819 -- # '[' -z 64722 ']' 00:10:52.785 00:23:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:52.785 00:23:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:52.785 00:23:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:52.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:52.785 00:23:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:52.785 00:23:08 -- common/autotest_common.sh@10 -- # set +x 00:10:52.785 [2024-09-29 00:23:08.592897] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:52.785 [2024-09-29 00:23:08.593303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64722 ] 00:10:53.042 [2024-09-29 00:23:08.724822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.042 [2024-09-29 00:23:08.775260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.975 00:23:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:53.975 00:23:09 -- common/autotest_common.sh@852 -- # return 0 00:10:53.975 00:23:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:54.233 [2024-09-29 00:23:09.827125] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:54.233 [2024-09-29 00:23:09.831918] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:54.233 [2024-09-29 00:23:09.831954] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:54.233 [2024-09-29 00:23:09.832019] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:54.233 [2024-09-29 00:23:09.832781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129e650 (107): Transport endpoint is not connected 00:10:54.233 [2024-09-29 00:23:09.833766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129e650 (9): Bad file descriptor 00:10:54.233 request: 00:10:54.233 { 00:10:54.233 "name": "TLSTEST", 00:10:54.233 "trtype": "tcp", 00:10:54.233 "traddr": "10.0.0.2", 00:10:54.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:10:54.233 "adrfam": "ipv4", 00:10:54.233 "trsvcid": "4420", 00:10:54.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.234 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:54.234 "method": "bdev_nvme_attach_controller", 00:10:54.234 "req_id": 1 00:10:54.234 } 00:10:54.234 Got JSON-RPC error response 00:10:54.234 response: 00:10:54.234 { 00:10:54.234 "code": -32602, 00:10:54.234 "message": "Invalid parameters" 00:10:54.234 } 00:10:54.234 [2024-09-29 00:23:09.834761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:54.234 [2024-09-29 00:23:09.834789] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:54.234 [2024-09-29 00:23:09.834816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:54.234 00:23:09 -- target/tls.sh@36 -- # killprocess 64722 00:10:54.234 00:23:09 -- common/autotest_common.sh@926 -- # '[' -z 64722 ']' 00:10:54.234 00:23:09 -- common/autotest_common.sh@930 -- # kill -0 64722 00:10:54.234 00:23:09 -- common/autotest_common.sh@931 -- # uname 00:10:54.234 00:23:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:54.234 00:23:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64722 00:10:54.234 00:23:09 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:54.234 killing process with pid 64722 00:10:54.234 Received shutdown signal, test time was about 10.000000 seconds 00:10:54.234 00:10:54.234 Latency(us) 00:10:54.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.234 =================================================================================================================== 00:10:54.234 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:54.234 00:23:09 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:54.234 00:23:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64722' 00:10:54.234 00:23:09 -- common/autotest_common.sh@945 -- # kill 64722 00:10:54.234 00:23:09 -- common/autotest_common.sh@950 -- # wait 64722 00:10:54.234 00:23:10 -- target/tls.sh@37 -- # return 1 00:10:54.234 00:23:10 -- common/autotest_common.sh@643 -- # es=1 00:10:54.234 00:23:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:54.234 00:23:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:54.234 00:23:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:54.234 00:23:10 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:54.234 00:23:10 -- common/autotest_common.sh@640 -- # local es=0 00:10:54.234 00:23:10 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:54.234 00:23:10 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:10:54.234 00:23:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:54.234 00:23:10 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:10:54.234 00:23:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:54.234 00:23:10 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:54.234 00:23:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:54.234 00:23:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:54.234 00:23:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:10:54.234 00:23:10 -- target/tls.sh@23 -- # psk= 00:10:54.234 00:23:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:54.234 00:23:10 -- target/tls.sh@28 -- # bdevperf_pid=64744 00:10:54.234 00:23:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:54.234 00:23:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:54.234 00:23:10 -- target/tls.sh@31 -- # waitforlisten 64744 /var/tmp/bdevperf.sock 00:10:54.234 00:23:10 -- common/autotest_common.sh@819 -- # '[' -z 64744 ']' 00:10:54.234 00:23:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:54.234 00:23:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:54.234 00:23:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:54.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:54.234 00:23:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:54.234 00:23:10 -- common/autotest_common.sh@10 -- # set +x 00:10:54.492 [2024-09-29 00:23:10.123406] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:54.492 [2024-09-29 00:23:10.123786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64744 ] 00:10:54.492 [2024-09-29 00:23:10.261635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.492 [2024-09-29 00:23:10.316408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.426 00:23:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:55.426 00:23:11 -- common/autotest_common.sh@852 -- # return 0 00:10:55.426 00:23:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:55.684 [2024-09-29 00:23:11.291090] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:55.684 [2024-09-29 00:23:11.292939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f3010 (9): Bad file descriptor 00:10:55.684 [2024-09-29 00:23:11.293949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error request: 00:10:55.684 { 00:10:55.684 "name": "TLSTEST", 00:10:55.684 "trtype": "tcp", 00:10:55.684 "traddr": "10.0.0.2", 00:10:55.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:55.685 "adrfam": "ipv4", 00:10:55.685 "trsvcid": "4420", 00:10:55.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.685 "method": "bdev_nvme_attach_controller", 00:10:55.685 "req_id": 1 00:10:55.685 } 00:10:55.685 Got JSON-RPC error response 00:10:55.685 response: 00:10:55.685 { 00:10:55.685 "code": -32602, 00:10:55.685 "message": "Invalid parameters" 00:10:55.685 } 00:10:55.685 state 00:10:55.685 [2024-09-29 00:23:11.294158] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:55.685 [2024-09-29 00:23:11.294192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:10:55.685 00:23:11 -- target/tls.sh@36 -- # killprocess 64744 00:10:55.685 00:23:11 -- common/autotest_common.sh@926 -- # '[' -z 64744 ']' 00:10:55.685 00:23:11 -- common/autotest_common.sh@930 -- # kill -0 64744 00:10:55.685 00:23:11 -- common/autotest_common.sh@931 -- # uname 00:10:55.685 00:23:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:55.685 00:23:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64744 00:10:55.685 killing process with pid 64744 00:10:55.685 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.685 00:10:55.685 Latency(us) 00:10:55.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.685 =================================================================================================================== 00:10:55.685 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:55.685 00:23:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:55.685 00:23:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:55.685 00:23:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64744' 00:10:55.685 00:23:11 -- common/autotest_common.sh@945 -- # kill 64744 00:10:55.685 00:23:11 -- common/autotest_common.sh@950 -- # wait 64744 00:10:55.685 00:23:11 -- target/tls.sh@37 -- # return 1 00:10:55.685 00:23:11 -- common/autotest_common.sh@643 -- # es=1 00:10:55.685 00:23:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:55.685 00:23:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:55.685 00:23:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:55.685 00:23:11 -- target/tls.sh@167 -- # killprocess 64291 00:10:55.685 00:23:11 -- common/autotest_common.sh@926 -- # '[' -z 64291 ']' 00:10:55.685 00:23:11 -- common/autotest_common.sh@930 -- # kill -0 64291 00:10:55.685 00:23:11 -- common/autotest_common.sh@931 -- # uname 00:10:55.685 00:23:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:55.685 00:23:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64291 00:10:55.943 killing process with pid 64291 00:10:55.943 00:23:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:55.943 00:23:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:55.943 00:23:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64291' 00:10:55.943 00:23:11 -- common/autotest_common.sh@945 -- # kill 64291 00:10:55.943 00:23:11 -- common/autotest_common.sh@950 -- # wait 64291 00:10:55.943 00:23:11 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:55.943 00:23:11 -- target/tls.sh@49 -- # local key hash crc 00:10:55.943 00:23:11 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:55.943 00:23:11 -- target/tls.sh@51 -- # hash=02 00:10:55.943 00:23:11 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:55.943 00:23:11 -- target/tls.sh@52 -- # gzip -1 -c 00:10:55.943 00:23:11 -- target/tls.sh@52 -- # head -c 4 00:10:55.943 00:23:11 -- target/tls.sh@52 -- # tail -c8 00:10:55.943 00:23:11 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:55.943 00:23:11 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:55.943 00:23:11 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:55.943 00:23:11 -- target/tls.sh@54 -- # echo 
NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.943 00:23:11 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.943 00:23:11 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.943 00:23:11 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:55.943 00:23:11 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.943 00:23:11 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:55.943 00:23:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:55.943 00:23:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:55.943 00:23:11 -- common/autotest_common.sh@10 -- # set +x 00:10:55.943 00:23:11 -- nvmf/common.sh@469 -- # nvmfpid=64792 00:10:55.944 00:23:11 -- nvmf/common.sh@470 -- # waitforlisten 64792 00:10:55.944 00:23:11 -- common/autotest_common.sh@819 -- # '[' -z 64792 ']' 00:10:55.944 00:23:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:55.944 00:23:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.944 00:23:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.944 00:23:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.944 00:23:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:55.944 00:23:11 -- common/autotest_common.sh@10 -- # set +x 00:10:56.202 [2024-09-29 00:23:11.821884] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:56.202 [2024-09-29 00:23:11.822215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.202 [2024-09-29 00:23:11.958054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.202 [2024-09-29 00:23:12.010029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.202 [2024-09-29 00:23:12.010181] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.202 [2024-09-29 00:23:12.010193] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.202 [2024-09-29 00:23:12.010201] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
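The freshly started target is then configured through the same setup_nvmf_tgt helper as at the beginning of the run, now pointing the host entry at key_long.txt. Collected from the traced RPCs (key path shortened, rpc.py standing for scripts/rpc.py), the target-side sequence amounts to:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests the TLS listener ("TLS support is considered experimental")
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt

The key_long.txt value itself differs from the earlier keys only in the longer configured key (48 hex characters instead of 32) and the 02 hash field of the NVMeTLSkey-1 header, as shown just above.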
00:10:56.202 [2024-09-29 00:23:12.010230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.139 00:23:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:57.139 00:23:12 -- common/autotest_common.sh@852 -- # return 0 00:10:57.139 00:23:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:57.139 00:23:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:57.139 00:23:12 -- common/autotest_common.sh@10 -- # set +x 00:10:57.139 00:23:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.139 00:23:12 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:57.139 00:23:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:57.139 00:23:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:57.397 [2024-09-29 00:23:13.098539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.397 00:23:13 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:57.655 00:23:13 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:57.914 [2024-09-29 00:23:13.554619] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:57.914 [2024-09-29 00:23:13.554870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.914 00:23:13 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:58.172 malloc0 00:10:58.172 00:23:13 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:58.431 00:23:14 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:58.690 00:23:14 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:58.690 00:23:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:58.690 00:23:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:58.690 00:23:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:58.690 00:23:14 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:58.690 00:23:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:58.690 00:23:14 -- target/tls.sh@28 -- # bdevperf_pid=64847 00:10:58.690 00:23:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:58.690 00:23:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:58.690 00:23:14 -- target/tls.sh@31 -- # waitforlisten 64847 /var/tmp/bdevperf.sock 00:10:58.690 00:23:14 -- common/autotest_common.sh@819 -- # '[' -z 64847 ']' 00:10:58.690 00:23:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:58.690 00:23:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:58.690 00:23:14 -- common/autotest_common.sh@826 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:58.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:58.690 00:23:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:58.691 00:23:14 -- common/autotest_common.sh@10 -- # set +x 00:10:58.691 [2024-09-29 00:23:14.357828] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:10:58.691 [2024-09-29 00:23:14.358089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64847 ] 00:10:58.691 [2024-09-29 00:23:14.495207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.949 [2024-09-29 00:23:14.563838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.516 00:23:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:59.516 00:23:15 -- common/autotest_common.sh@852 -- # return 0 00:10:59.516 00:23:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:59.774 [2024-09-29 00:23:15.530827] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:59.775 TLSTESTn1 00:10:59.775 00:23:15 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:00.033 Running I/O for 10 seconds... 00:11:10.009 00:11:10.009 Latency(us) 00:11:10.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.009 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:10.009 Verification LBA range: start 0x0 length 0x2000 00:11:10.009 TLSTESTn1 : 10.01 6239.59 24.37 0.00 0.00 20482.16 4676.89 20018.27 00:11:10.009 =================================================================================================================== 00:11:10.009 Total : 6239.59 24.37 0.00 0.00 20482.16 4676.89 20018.27 00:11:10.009 0 00:11:10.009 00:23:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:10.009 00:23:25 -- target/tls.sh@45 -- # killprocess 64847 00:11:10.009 00:23:25 -- common/autotest_common.sh@926 -- # '[' -z 64847 ']' 00:11:10.009 00:23:25 -- common/autotest_common.sh@930 -- # kill -0 64847 00:11:10.009 00:23:25 -- common/autotest_common.sh@931 -- # uname 00:11:10.009 00:23:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:10.009 00:23:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64847 00:11:10.009 killing process with pid 64847 00:11:10.009 Received shutdown signal, test time was about 10.000000 seconds 00:11:10.009 00:11:10.009 Latency(us) 00:11:10.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.009 =================================================================================================================== 00:11:10.009 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:10.009 00:23:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:10.009 00:23:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:10.009 00:23:25 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 64847' 00:11:10.009 00:23:25 -- common/autotest_common.sh@945 -- # kill 64847 00:11:10.009 00:23:25 -- common/autotest_common.sh@950 -- # wait 64847 00:11:10.269 00:23:25 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.269 00:23:25 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.269 00:23:25 -- common/autotest_common.sh@640 -- # local es=0 00:11:10.269 00:23:25 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.269 00:23:25 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:10.269 00:23:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:10.269 00:23:25 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:10.269 00:23:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:10.269 00:23:25 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.269 00:23:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:10.269 00:23:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:10.269 00:23:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:10.269 00:23:25 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:10.269 00:23:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:10.269 00:23:25 -- target/tls.sh@28 -- # bdevperf_pid=64980 00:11:10.269 00:23:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:10.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:10.269 00:23:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:10.269 00:23:25 -- target/tls.sh@31 -- # waitforlisten 64980 /var/tmp/bdevperf.sock 00:11:10.269 00:23:25 -- common/autotest_common.sh@819 -- # '[' -z 64980 ']' 00:11:10.269 00:23:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:10.269 00:23:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:10.269 00:23:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:10.269 00:23:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:10.269 00:23:25 -- common/autotest_common.sh@10 -- # set +x 00:11:10.269 [2024-09-29 00:23:26.020626] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:10.269 [2024-09-29 00:23:26.020747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64980 ] 00:11:10.529 [2024-09-29 00:23:26.160435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.529 [2024-09-29 00:23:26.213361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.466 00:23:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:11.466 00:23:27 -- common/autotest_common.sh@852 -- # return 0 00:11:11.466 00:23:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.466 [2024-09-29 00:23:27.204072] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:11.466 [2024-09-29 00:23:27.204134] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:11.466 request: 00:11:11.466 { 00:11:11.466 "name": "TLSTEST", 00:11:11.466 "trtype": "tcp", 00:11:11.466 "traddr": "10.0.0.2", 00:11:11.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:11.466 "adrfam": "ipv4", 00:11:11.466 "trsvcid": "4420", 00:11:11.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:11.466 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:11.466 "method": "bdev_nvme_attach_controller", 00:11:11.466 "req_id": 1 00:11:11.466 } 00:11:11.466 Got JSON-RPC error response 00:11:11.466 response: 00:11:11.466 { 00:11:11.466 "code": -22, 00:11:11.466 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:11.466 } 00:11:11.466 00:23:27 -- target/tls.sh@36 -- # killprocess 64980 00:11:11.466 00:23:27 -- common/autotest_common.sh@926 -- # '[' -z 64980 ']' 00:11:11.466 00:23:27 -- common/autotest_common.sh@930 -- # kill -0 64980 00:11:11.466 00:23:27 -- common/autotest_common.sh@931 -- # uname 00:11:11.466 00:23:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:11.466 00:23:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64980 00:11:11.466 00:23:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:11.466 00:23:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:11.466 00:23:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64980' 00:11:11.466 killing process with pid 64980 00:11:11.466 00:23:27 -- common/autotest_common.sh@945 -- # kill 64980 00:11:11.466 Received shutdown signal, test time was about 10.000000 seconds 00:11:11.466 00:11:11.466 Latency(us) 00:11:11.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.466 =================================================================================================================== 00:11:11.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:11.466 00:23:27 -- common/autotest_common.sh@950 -- # wait 64980 00:11:11.725 00:23:27 -- target/tls.sh@37 -- # return 1 00:11:11.725 00:23:27 -- common/autotest_common.sh@643 -- # es=1 00:11:11.725 00:23:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:11.725 00:23:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:11.725 00:23:27 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:11.725 00:23:27 -- target/tls.sh@183 -- # killprocess 64792 00:11:11.725 00:23:27 -- common/autotest_common.sh@926 -- # '[' -z 64792 ']' 00:11:11.725 00:23:27 -- common/autotest_common.sh@930 -- # kill -0 64792 00:11:11.725 00:23:27 -- common/autotest_common.sh@931 -- # uname 00:11:11.725 00:23:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:11.725 00:23:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64792 00:11:11.725 killing process with pid 64792 00:11:11.725 00:23:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:11.725 00:23:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:11.725 00:23:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64792' 00:11:11.725 00:23:27 -- common/autotest_common.sh@945 -- # kill 64792 00:11:11.725 00:23:27 -- common/autotest_common.sh@950 -- # wait 64792 00:11:11.984 00:23:27 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:11.984 00:23:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:11.984 00:23:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:11.984 00:23:27 -- common/autotest_common.sh@10 -- # set +x 00:11:11.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.984 00:23:27 -- nvmf/common.sh@469 -- # nvmfpid=65014 00:11:11.984 00:23:27 -- nvmf/common.sh@470 -- # waitforlisten 65014 00:11:11.984 00:23:27 -- common/autotest_common.sh@819 -- # '[' -z 65014 ']' 00:11:11.984 00:23:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.984 00:23:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:11.984 00:23:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.984 00:23:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:11.984 00:23:27 -- common/autotest_common.sh@10 -- # set +x 00:11:11.984 00:23:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:11.984 [2024-09-29 00:23:27.697795] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:11.984 [2024-09-29 00:23:27.697900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.243 [2024-09-29 00:23:27.836771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.243 [2024-09-29 00:23:27.885252] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:12.243 [2024-09-29 00:23:27.885427] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.243 [2024-09-29 00:23:27.885454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.243 [2024-09-29 00:23:27.885462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
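Both chmod cases around here exercise the same rule: the PSK file is loaded only if its permissions are restrictive. With 0600 every earlier load succeeded; after the chmod 0666 above, tcp_load_psk refuses the file on the initiator (bdev_nvme_attach_controller fails with -22, "Could not retrieve PSK from file"), and the target-side nvmf_subsystem_add_host against the same file fails next for the same reason. In shell terms the contrast is just:

    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # owner-only: accepted by tcp_load_psk
    chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # group/world access: "Incorrect permissions for PSK file"

The trace only contrasts these two modes, so the exact permission mask the loader enforces is not shown here.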
00:11:12.243 [2024-09-29 00:23:27.885507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.810 00:23:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:12.810 00:23:28 -- common/autotest_common.sh@852 -- # return 0 00:11:12.810 00:23:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:12.810 00:23:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:12.810 00:23:28 -- common/autotest_common.sh@10 -- # set +x 00:11:12.810 00:23:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.810 00:23:28 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.810 00:23:28 -- common/autotest_common.sh@640 -- # local es=0 00:11:12.810 00:23:28 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.810 00:23:28 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:11:12.810 00:23:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:12.810 00:23:28 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:11:13.069 00:23:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:13.069 00:23:28 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:13.069 00:23:28 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:13.069 00:23:28 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:13.333 [2024-09-29 00:23:28.919216] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.333 00:23:28 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:13.591 00:23:29 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:13.591 [2024-09-29 00:23:29.423347] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:13.591 [2024-09-29 00:23:29.423634] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.850 00:23:29 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:13.850 malloc0 00:11:13.850 00:23:29 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:14.108 00:23:29 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:14.366 [2024-09-29 00:23:30.169890] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:14.366 [2024-09-29 00:23:30.169952] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:14.366 [2024-09-29 00:23:30.169985] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:14.366 request: 00:11:14.366 { 00:11:14.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.366 "host": "nqn.2016-06.io.spdk:host1", 00:11:14.366 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:14.366 "method": "nvmf_subsystem_add_host", 00:11:14.366 
"req_id": 1 00:11:14.366 } 00:11:14.366 Got JSON-RPC error response 00:11:14.366 response: 00:11:14.366 { 00:11:14.366 "code": -32603, 00:11:14.366 "message": "Internal error" 00:11:14.366 } 00:11:14.366 00:23:30 -- common/autotest_common.sh@643 -- # es=1 00:11:14.366 00:23:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:14.366 00:23:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:14.366 00:23:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:14.366 00:23:30 -- target/tls.sh@189 -- # killprocess 65014 00:11:14.366 00:23:30 -- common/autotest_common.sh@926 -- # '[' -z 65014 ']' 00:11:14.366 00:23:30 -- common/autotest_common.sh@930 -- # kill -0 65014 00:11:14.366 00:23:30 -- common/autotest_common.sh@931 -- # uname 00:11:14.366 00:23:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.366 00:23:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65014 00:11:14.624 killing process with pid 65014 00:11:14.624 00:23:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:14.624 00:23:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:14.624 00:23:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65014' 00:11:14.624 00:23:30 -- common/autotest_common.sh@945 -- # kill 65014 00:11:14.624 00:23:30 -- common/autotest_common.sh@950 -- # wait 65014 00:11:14.624 00:23:30 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:14.624 00:23:30 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:14.624 00:23:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:14.624 00:23:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:14.624 00:23:30 -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 00:23:30 -- nvmf/common.sh@469 -- # nvmfpid=65075 00:11:14.624 00:23:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:14.624 00:23:30 -- nvmf/common.sh@470 -- # waitforlisten 65075 00:11:14.624 00:23:30 -- common/autotest_common.sh@819 -- # '[' -z 65075 ']' 00:11:14.624 00:23:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.624 00:23:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.624 00:23:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.624 00:23:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.624 00:23:30 -- common/autotest_common.sh@10 -- # set +x 00:11:14.883 [2024-09-29 00:23:30.481007] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:14.883 [2024-09-29 00:23:30.481120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.883 [2024-09-29 00:23:30.621773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.883 [2024-09-29 00:23:30.676943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:14.883 [2024-09-29 00:23:30.677093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:14.883 [2024-09-29 00:23:30.677105] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.883 [2024-09-29 00:23:30.677112] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.883 [2024-09-29 00:23:30.677139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.846 00:23:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.846 00:23:31 -- common/autotest_common.sh@852 -- # return 0 00:11:15.846 00:23:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:15.846 00:23:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:15.846 00:23:31 -- common/autotest_common.sh@10 -- # set +x 00:11:15.846 00:23:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.846 00:23:31 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.846 00:23:31 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:15.846 00:23:31 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:16.106 [2024-09-29 00:23:31.719510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.106 00:23:31 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:16.365 00:23:32 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:16.365 [2024-09-29 00:23:32.203615] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:16.365 [2024-09-29 00:23:32.203860] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.624 00:23:32 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:16.624 malloc0 00:11:16.624 00:23:32 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:16.883 00:23:32 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:17.142 00:23:32 -- target/tls.sh@197 -- # bdevperf_pid=65131 00:11:17.142 00:23:32 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:17.142 00:23:32 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:17.142 00:23:32 -- target/tls.sh@200 -- # waitforlisten 65131 /var/tmp/bdevperf.sock 00:11:17.142 00:23:32 -- common/autotest_common.sh@819 -- # '[' -z 65131 ']' 00:11:17.142 00:23:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:17.142 00:23:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:17.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:17.143 00:23:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
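Note on the second setup above: target/tls.sh@194 through @200 repeat the target configuration with the key file now at mode 0600 and then launch bdevperf in RPC-server mode. An end-to-end sketch of that flow, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock and reusing the paths and NQNs from the log (the harness additionally runs the target inside the nvmf_tgt_ns_spdk network namespace, omitted here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # Target side: TCP transport, subsystem, TLS listener (-k), malloc namespace, PSK-bound host.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # Initiator side: bdevperf waits for RPC commands (-z), the controller is attached over
  # TLS, and bdevperf.py drives the timed verify workload (the steps that follow below).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # harness uses waitforlisten here
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests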
00:11:17.143 00:23:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:17.143 00:23:32 -- common/autotest_common.sh@10 -- # set +x 00:11:17.143 [2024-09-29 00:23:32.941858] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:17.143 [2024-09-29 00:23:32.941980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65131 ] 00:11:17.401 [2024-09-29 00:23:33.083621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.401 [2024-09-29 00:23:33.151153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.340 00:23:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:18.340 00:23:33 -- common/autotest_common.sh@852 -- # return 0 00:11:18.340 00:23:33 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:18.340 [2024-09-29 00:23:34.083563] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:18.340 TLSTESTn1 00:11:18.340 00:23:34 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:18.909 00:23:34 -- target/tls.sh@205 -- # tgtconf='{ 00:11:18.909 "subsystems": [ 00:11:18.909 { 00:11:18.909 "subsystem": "iobuf", 00:11:18.909 "config": [ 00:11:18.909 { 00:11:18.909 "method": "iobuf_set_options", 00:11:18.909 "params": { 00:11:18.909 "small_pool_count": 8192, 00:11:18.909 "large_pool_count": 1024, 00:11:18.909 "small_bufsize": 8192, 00:11:18.909 "large_bufsize": 135168 00:11:18.909 } 00:11:18.909 } 00:11:18.909 ] 00:11:18.909 }, 00:11:18.909 { 00:11:18.909 "subsystem": "sock", 00:11:18.909 "config": [ 00:11:18.909 { 00:11:18.909 "method": "sock_impl_set_options", 00:11:18.909 "params": { 00:11:18.909 "impl_name": "uring", 00:11:18.909 "recv_buf_size": 2097152, 00:11:18.909 "send_buf_size": 2097152, 00:11:18.909 "enable_recv_pipe": true, 00:11:18.909 "enable_quickack": false, 00:11:18.909 "enable_placement_id": 0, 00:11:18.909 "enable_zerocopy_send_server": false, 00:11:18.909 "enable_zerocopy_send_client": false, 00:11:18.909 "zerocopy_threshold": 0, 00:11:18.909 "tls_version": 0, 00:11:18.909 "enable_ktls": false 00:11:18.909 } 00:11:18.909 }, 00:11:18.909 { 00:11:18.909 "method": "sock_impl_set_options", 00:11:18.909 "params": { 00:11:18.909 "impl_name": "posix", 00:11:18.909 "recv_buf_size": 2097152, 00:11:18.909 "send_buf_size": 2097152, 00:11:18.909 "enable_recv_pipe": true, 00:11:18.909 "enable_quickack": false, 00:11:18.909 "enable_placement_id": 0, 00:11:18.909 "enable_zerocopy_send_server": true, 00:11:18.909 "enable_zerocopy_send_client": false, 00:11:18.909 "zerocopy_threshold": 0, 00:11:18.909 "tls_version": 0, 00:11:18.909 "enable_ktls": false 00:11:18.909 } 00:11:18.909 }, 00:11:18.909 { 00:11:18.909 "method": "sock_impl_set_options", 00:11:18.909 "params": { 00:11:18.909 "impl_name": "ssl", 00:11:18.909 "recv_buf_size": 4096, 00:11:18.909 "send_buf_size": 4096, 00:11:18.909 "enable_recv_pipe": true, 00:11:18.909 "enable_quickack": false, 00:11:18.909 "enable_placement_id": 0, 00:11:18.909 "enable_zerocopy_send_server": true, 00:11:18.909 "enable_zerocopy_send_client": false, 00:11:18.909 
"zerocopy_threshold": 0, 00:11:18.909 "tls_version": 0, 00:11:18.909 "enable_ktls": false 00:11:18.909 } 00:11:18.909 } 00:11:18.909 ] 00:11:18.909 }, 00:11:18.909 { 00:11:18.909 "subsystem": "vmd", 00:11:18.909 "config": [] 00:11:18.909 }, 00:11:18.909 { 00:11:18.909 "subsystem": "accel", 00:11:18.909 "config": [ 00:11:18.909 { 00:11:18.909 "method": "accel_set_options", 00:11:18.909 "params": { 00:11:18.909 "small_cache_size": 128, 00:11:18.909 "large_cache_size": 16, 00:11:18.909 "task_count": 2048, 00:11:18.909 "sequence_count": 2048, 00:11:18.909 "buf_count": 2048 00:11:18.909 } 00:11:18.909 } 00:11:18.909 ] 00:11:18.909 }, 00:11:18.910 { 00:11:18.910 "subsystem": "bdev", 00:11:18.910 "config": [ 00:11:18.910 { 00:11:18.910 "method": "bdev_set_options", 00:11:18.910 "params": { 00:11:18.910 "bdev_io_pool_size": 65535, 00:11:18.910 "bdev_io_cache_size": 256, 00:11:18.910 "bdev_auto_examine": true, 00:11:18.910 "iobuf_small_cache_size": 128, 00:11:18.910 "iobuf_large_cache_size": 16 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_raid_set_options", 00:11:18.910 "params": { 00:11:18.910 "process_window_size_kb": 1024 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_iscsi_set_options", 00:11:18.910 "params": { 00:11:18.910 "timeout_sec": 30 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_nvme_set_options", 00:11:18.910 "params": { 00:11:18.910 "action_on_timeout": "none", 00:11:18.910 "timeout_us": 0, 00:11:18.910 "timeout_admin_us": 0, 00:11:18.910 "keep_alive_timeout_ms": 10000, 00:11:18.910 "transport_retry_count": 4, 00:11:18.910 "arbitration_burst": 0, 00:11:18.910 "low_priority_weight": 0, 00:11:18.910 "medium_priority_weight": 0, 00:11:18.910 "high_priority_weight": 0, 00:11:18.910 "nvme_adminq_poll_period_us": 10000, 00:11:18.910 "nvme_ioq_poll_period_us": 0, 00:11:18.910 "io_queue_requests": 0, 00:11:18.910 "delay_cmd_submit": true, 00:11:18.910 "bdev_retry_count": 3, 00:11:18.910 "transport_ack_timeout": 0, 00:11:18.910 "ctrlr_loss_timeout_sec": 0, 00:11:18.910 "reconnect_delay_sec": 0, 00:11:18.910 "fast_io_fail_timeout_sec": 0, 00:11:18.910 "generate_uuids": false, 00:11:18.910 "transport_tos": 0, 00:11:18.910 "io_path_stat": false, 00:11:18.910 "allow_accel_sequence": false 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_nvme_set_hotplug", 00:11:18.910 "params": { 00:11:18.910 "period_us": 100000, 00:11:18.910 "enable": false 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_malloc_create", 00:11:18.910 "params": { 00:11:18.910 "name": "malloc0", 00:11:18.910 "num_blocks": 8192, 00:11:18.910 "block_size": 4096, 00:11:18.910 "physical_block_size": 4096, 00:11:18.910 "uuid": "f6c3b074-708d-405f-a018-a8add8b6b9a4", 00:11:18.910 "optimal_io_boundary": 0 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "bdev_wait_for_examine" 00:11:18.910 } 00:11:18.910 ] 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "subsystem": "nbd", 00:11:18.910 "config": [] 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "subsystem": "scheduler", 00:11:18.910 "config": [ 00:11:18.910 { 00:11:18.910 "method": "framework_set_scheduler", 00:11:18.910 "params": { 00:11:18.910 "name": "static" 00:11:18.910 } 00:11:18.910 } 00:11:18.910 ] 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "subsystem": "nvmf", 00:11:18.910 "config": [ 00:11:18.910 { 00:11:18.910 "method": "nvmf_set_config", 00:11:18.910 "params": { 00:11:18.910 "discovery_filter": "match_any", 00:11:18.910 
"admin_cmd_passthru": { 00:11:18.910 "identify_ctrlr": false 00:11:18.910 } 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_set_max_subsystems", 00:11:18.910 "params": { 00:11:18.910 "max_subsystems": 1024 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_set_crdt", 00:11:18.910 "params": { 00:11:18.910 "crdt1": 0, 00:11:18.910 "crdt2": 0, 00:11:18.910 "crdt3": 0 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_create_transport", 00:11:18.910 "params": { 00:11:18.910 "trtype": "TCP", 00:11:18.910 "max_queue_depth": 128, 00:11:18.910 "max_io_qpairs_per_ctrlr": 127, 00:11:18.910 "in_capsule_data_size": 4096, 00:11:18.910 "max_io_size": 131072, 00:11:18.910 "io_unit_size": 131072, 00:11:18.910 "max_aq_depth": 128, 00:11:18.910 "num_shared_buffers": 511, 00:11:18.910 "buf_cache_size": 4294967295, 00:11:18.910 "dif_insert_or_strip": false, 00:11:18.910 "zcopy": false, 00:11:18.910 "c2h_success": false, 00:11:18.910 "sock_priority": 0, 00:11:18.910 "abort_timeout_sec": 1 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_create_subsystem", 00:11:18.910 "params": { 00:11:18.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.910 "allow_any_host": false, 00:11:18.910 "serial_number": "SPDK00000000000001", 00:11:18.910 "model_number": "SPDK bdev Controller", 00:11:18.910 "max_namespaces": 10, 00:11:18.910 "min_cntlid": 1, 00:11:18.910 "max_cntlid": 65519, 00:11:18.910 "ana_reporting": false 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_subsystem_add_host", 00:11:18.910 "params": { 00:11:18.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.910 "host": "nqn.2016-06.io.spdk:host1", 00:11:18.910 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_subsystem_add_ns", 00:11:18.910 "params": { 00:11:18.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.910 "namespace": { 00:11:18.910 "nsid": 1, 00:11:18.910 "bdev_name": "malloc0", 00:11:18.910 "nguid": "F6C3B074708D405FA018A8ADD8B6B9A4", 00:11:18.910 "uuid": "f6c3b074-708d-405f-a018-a8add8b6b9a4" 00:11:18.910 } 00:11:18.910 } 00:11:18.910 }, 00:11:18.910 { 00:11:18.910 "method": "nvmf_subsystem_add_listener", 00:11:18.910 "params": { 00:11:18.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.910 "listen_address": { 00:11:18.910 "trtype": "TCP", 00:11:18.910 "adrfam": "IPv4", 00:11:18.910 "traddr": "10.0.0.2", 00:11:18.910 "trsvcid": "4420" 00:11:18.910 }, 00:11:18.910 "secure_channel": true 00:11:18.910 } 00:11:18.910 } 00:11:18.910 ] 00:11:18.910 } 00:11:18.910 ] 00:11:18.910 }' 00:11:18.910 00:23:34 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:19.170 00:23:34 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:19.170 "subsystems": [ 00:11:19.170 { 00:11:19.170 "subsystem": "iobuf", 00:11:19.170 "config": [ 00:11:19.170 { 00:11:19.170 "method": "iobuf_set_options", 00:11:19.170 "params": { 00:11:19.170 "small_pool_count": 8192, 00:11:19.170 "large_pool_count": 1024, 00:11:19.170 "small_bufsize": 8192, 00:11:19.170 "large_bufsize": 135168 00:11:19.170 } 00:11:19.170 } 00:11:19.170 ] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "subsystem": "sock", 00:11:19.170 "config": [ 00:11:19.170 { 00:11:19.170 "method": "sock_impl_set_options", 00:11:19.170 "params": { 00:11:19.170 "impl_name": "uring", 00:11:19.170 "recv_buf_size": 2097152, 00:11:19.170 "send_buf_size": 2097152, 
00:11:19.170 "enable_recv_pipe": true, 00:11:19.170 "enable_quickack": false, 00:11:19.170 "enable_placement_id": 0, 00:11:19.170 "enable_zerocopy_send_server": false, 00:11:19.170 "enable_zerocopy_send_client": false, 00:11:19.170 "zerocopy_threshold": 0, 00:11:19.170 "tls_version": 0, 00:11:19.170 "enable_ktls": false 00:11:19.170 } 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "method": "sock_impl_set_options", 00:11:19.170 "params": { 00:11:19.170 "impl_name": "posix", 00:11:19.170 "recv_buf_size": 2097152, 00:11:19.170 "send_buf_size": 2097152, 00:11:19.170 "enable_recv_pipe": true, 00:11:19.170 "enable_quickack": false, 00:11:19.170 "enable_placement_id": 0, 00:11:19.170 "enable_zerocopy_send_server": true, 00:11:19.170 "enable_zerocopy_send_client": false, 00:11:19.170 "zerocopy_threshold": 0, 00:11:19.170 "tls_version": 0, 00:11:19.170 "enable_ktls": false 00:11:19.170 } 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "method": "sock_impl_set_options", 00:11:19.170 "params": { 00:11:19.170 "impl_name": "ssl", 00:11:19.170 "recv_buf_size": 4096, 00:11:19.170 "send_buf_size": 4096, 00:11:19.170 "enable_recv_pipe": true, 00:11:19.170 "enable_quickack": false, 00:11:19.170 "enable_placement_id": 0, 00:11:19.170 "enable_zerocopy_send_server": true, 00:11:19.170 "enable_zerocopy_send_client": false, 00:11:19.170 "zerocopy_threshold": 0, 00:11:19.170 "tls_version": 0, 00:11:19.170 "enable_ktls": false 00:11:19.170 } 00:11:19.170 } 00:11:19.170 ] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "subsystem": "vmd", 00:11:19.170 "config": [] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "subsystem": "accel", 00:11:19.170 "config": [ 00:11:19.170 { 00:11:19.170 "method": "accel_set_options", 00:11:19.170 "params": { 00:11:19.170 "small_cache_size": 128, 00:11:19.170 "large_cache_size": 16, 00:11:19.170 "task_count": 2048, 00:11:19.170 "sequence_count": 2048, 00:11:19.170 "buf_count": 2048 00:11:19.170 } 00:11:19.170 } 00:11:19.170 ] 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "subsystem": "bdev", 00:11:19.170 "config": [ 00:11:19.170 { 00:11:19.170 "method": "bdev_set_options", 00:11:19.170 "params": { 00:11:19.170 "bdev_io_pool_size": 65535, 00:11:19.170 "bdev_io_cache_size": 256, 00:11:19.170 "bdev_auto_examine": true, 00:11:19.170 "iobuf_small_cache_size": 128, 00:11:19.170 "iobuf_large_cache_size": 16 00:11:19.170 } 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "method": "bdev_raid_set_options", 00:11:19.170 "params": { 00:11:19.170 "process_window_size_kb": 1024 00:11:19.170 } 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "method": "bdev_iscsi_set_options", 00:11:19.170 "params": { 00:11:19.170 "timeout_sec": 30 00:11:19.170 } 00:11:19.170 }, 00:11:19.170 { 00:11:19.170 "method": "bdev_nvme_set_options", 00:11:19.170 "params": { 00:11:19.170 "action_on_timeout": "none", 00:11:19.170 "timeout_us": 0, 00:11:19.170 "timeout_admin_us": 0, 00:11:19.170 "keep_alive_timeout_ms": 10000, 00:11:19.170 "transport_retry_count": 4, 00:11:19.170 "arbitration_burst": 0, 00:11:19.170 "low_priority_weight": 0, 00:11:19.170 "medium_priority_weight": 0, 00:11:19.170 "high_priority_weight": 0, 00:11:19.170 "nvme_adminq_poll_period_us": 10000, 00:11:19.170 "nvme_ioq_poll_period_us": 0, 00:11:19.170 "io_queue_requests": 512, 00:11:19.170 "delay_cmd_submit": true, 00:11:19.170 "bdev_retry_count": 3, 00:11:19.170 "transport_ack_timeout": 0, 00:11:19.170 "ctrlr_loss_timeout_sec": 0, 00:11:19.171 "reconnect_delay_sec": 0, 00:11:19.171 "fast_io_fail_timeout_sec": 0, 00:11:19.171 "generate_uuids": false, 00:11:19.171 
"transport_tos": 0, 00:11:19.171 "io_path_stat": false, 00:11:19.171 "allow_accel_sequence": false 00:11:19.171 } 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "method": "bdev_nvme_attach_controller", 00:11:19.171 "params": { 00:11:19.171 "name": "TLSTEST", 00:11:19.171 "trtype": "TCP", 00:11:19.171 "adrfam": "IPv4", 00:11:19.171 "traddr": "10.0.0.2", 00:11:19.171 "trsvcid": "4420", 00:11:19.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.171 "prchk_reftag": false, 00:11:19.171 "prchk_guard": false, 00:11:19.171 "ctrlr_loss_timeout_sec": 0, 00:11:19.171 "reconnect_delay_sec": 0, 00:11:19.171 "fast_io_fail_timeout_sec": 0, 00:11:19.171 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:19.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.171 "hdgst": false, 00:11:19.171 "ddgst": false 00:11:19.171 } 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "method": "bdev_nvme_set_hotplug", 00:11:19.171 "params": { 00:11:19.171 "period_us": 100000, 00:11:19.171 "enable": false 00:11:19.171 } 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "method": "bdev_wait_for_examine" 00:11:19.171 } 00:11:19.171 ] 00:11:19.171 }, 00:11:19.171 { 00:11:19.171 "subsystem": "nbd", 00:11:19.171 "config": [] 00:11:19.171 } 00:11:19.171 ] 00:11:19.171 }' 00:11:19.171 00:23:34 -- target/tls.sh@208 -- # killprocess 65131 00:11:19.171 00:23:34 -- common/autotest_common.sh@926 -- # '[' -z 65131 ']' 00:11:19.171 00:23:34 -- common/autotest_common.sh@930 -- # kill -0 65131 00:11:19.171 00:23:34 -- common/autotest_common.sh@931 -- # uname 00:11:19.171 00:23:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:19.171 00:23:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65131 00:11:19.171 00:23:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:19.171 killing process with pid 65131 00:11:19.171 00:23:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:19.171 00:23:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65131' 00:11:19.171 00:23:34 -- common/autotest_common.sh@945 -- # kill 65131 00:11:19.171 00:23:34 -- common/autotest_common.sh@950 -- # wait 65131 00:11:19.171 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.171 00:11:19.171 Latency(us) 00:11:19.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.171 =================================================================================================================== 00:11:19.171 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:19.171 00:23:35 -- target/tls.sh@209 -- # killprocess 65075 00:11:19.171 00:23:35 -- common/autotest_common.sh@926 -- # '[' -z 65075 ']' 00:11:19.171 00:23:35 -- common/autotest_common.sh@930 -- # kill -0 65075 00:11:19.171 00:23:35 -- common/autotest_common.sh@931 -- # uname 00:11:19.171 00:23:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:19.430 00:23:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65075 00:11:19.430 00:23:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:19.430 killing process with pid 65075 00:11:19.430 00:23:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:19.430 00:23:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65075' 00:11:19.430 00:23:35 -- common/autotest_common.sh@945 -- # kill 65075 00:11:19.430 00:23:35 -- common/autotest_common.sh@950 -- # wait 65075 00:11:19.430 00:23:35 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 
-c /dev/fd/62 00:11:19.430 00:23:35 -- target/tls.sh@212 -- # echo '{ 00:11:19.430 "subsystems": [ 00:11:19.430 { 00:11:19.430 "subsystem": "iobuf", 00:11:19.430 "config": [ 00:11:19.430 { 00:11:19.430 "method": "iobuf_set_options", 00:11:19.430 "params": { 00:11:19.430 "small_pool_count": 8192, 00:11:19.430 "large_pool_count": 1024, 00:11:19.430 "small_bufsize": 8192, 00:11:19.430 "large_bufsize": 135168 00:11:19.430 } 00:11:19.430 } 00:11:19.430 ] 00:11:19.430 }, 00:11:19.430 { 00:11:19.430 "subsystem": "sock", 00:11:19.430 "config": [ 00:11:19.430 { 00:11:19.430 "method": "sock_impl_set_options", 00:11:19.430 "params": { 00:11:19.430 "impl_name": "uring", 00:11:19.430 "recv_buf_size": 2097152, 00:11:19.430 "send_buf_size": 2097152, 00:11:19.430 "enable_recv_pipe": true, 00:11:19.430 "enable_quickack": false, 00:11:19.430 "enable_placement_id": 0, 00:11:19.430 "enable_zerocopy_send_server": false, 00:11:19.430 "enable_zerocopy_send_client": false, 00:11:19.430 "zerocopy_threshold": 0, 00:11:19.430 "tls_version": 0, 00:11:19.431 "enable_ktls": false 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "sock_impl_set_options", 00:11:19.431 "params": { 00:11:19.431 "impl_name": "posix", 00:11:19.431 "recv_buf_size": 2097152, 00:11:19.431 "send_buf_size": 2097152, 00:11:19.431 "enable_recv_pipe": true, 00:11:19.431 "enable_quickack": false, 00:11:19.431 "enable_placement_id": 0, 00:11:19.431 "enable_zerocopy_send_server": true, 00:11:19.431 "enable_zerocopy_send_client": false, 00:11:19.431 "zerocopy_threshold": 0, 00:11:19.431 "tls_version": 0, 00:11:19.431 "enable_ktls": false 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "sock_impl_set_options", 00:11:19.431 "params": { 00:11:19.431 "impl_name": "ssl", 00:11:19.431 "recv_buf_size": 4096, 00:11:19.431 "send_buf_size": 4096, 00:11:19.431 "enable_recv_pipe": true, 00:11:19.431 "enable_quickack": false, 00:11:19.431 "enable_placement_id": 0, 00:11:19.431 "enable_zerocopy_send_server": true, 00:11:19.431 "enable_zerocopy_send_client": false, 00:11:19.431 "zerocopy_threshold": 0, 00:11:19.431 "tls_version": 0, 00:11:19.431 "enable_ktls": false 00:11:19.431 } 00:11:19.431 } 00:11:19.431 ] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "vmd", 00:11:19.431 "config": [] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "accel", 00:11:19.431 "config": [ 00:11:19.431 { 00:11:19.431 "method": "accel_set_options", 00:11:19.431 "params": { 00:11:19.431 "small_cache_size": 128, 00:11:19.431 "large_cache_size": 16, 00:11:19.431 "task_count": 2048, 00:11:19.431 "sequence_count": 2048, 00:11:19.431 "buf_count": 2048 00:11:19.431 } 00:11:19.431 } 00:11:19.431 ] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "bdev", 00:11:19.431 "config": [ 00:11:19.431 { 00:11:19.431 "method": "bdev_set_options", 00:11:19.431 "params": { 00:11:19.431 "bdev_io_pool_size": 65535, 00:11:19.431 "bdev_io_cache_size": 256, 00:11:19.431 "bdev_auto_examine": true, 00:11:19.431 "iobuf_small_cache_size": 128, 00:11:19.431 "iobuf_large_cache_size": 16 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_raid_set_options", 00:11:19.431 "params": { 00:11:19.431 "process_window_size_kb": 1024 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_iscsi_set_options", 00:11:19.431 "params": { 00:11:19.431 "timeout_sec": 30 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_nvme_set_options", 00:11:19.431 "params": { 00:11:19.431 "action_on_timeout": "none", 
00:11:19.431 "timeout_us": 0, 00:11:19.431 "timeout_admin_us": 0, 00:11:19.431 "keep_alive_timeout_ms": 10000, 00:11:19.431 "transport_retry_count": 4, 00:11:19.431 "arbitration_burst": 0, 00:11:19.431 "low_priority_weight": 0, 00:11:19.431 "medium_priority_weight": 0, 00:11:19.431 "high_priority_weight": 0, 00:11:19.431 "nvme_adminq_poll_period_us": 10000, 00:11:19.431 "nvme_ioq_poll_period_us": 0, 00:11:19.431 "io_queue_requests": 0, 00:11:19.431 "delay_cmd_submit": true, 00:11:19.431 "bdev_retry_count": 3, 00:11:19.431 "transport_ack_timeout": 0, 00:11:19.431 "ctrlr_loss_timeout_sec": 0, 00:11:19.431 "reconnect_delay_sec": 0, 00:11:19.431 "fast_io_fail_timeout_sec": 0, 00:11:19.431 "generate_uuids": false, 00:11:19.431 "transport_tos": 0, 00:11:19.431 "io_path_stat": false, 00:11:19.431 "allow_accel_sequence": false 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_nvme_set_hotplug", 00:11:19.431 "params": { 00:11:19.431 "period_us": 100000, 00:11:19.431 "enable": false 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_malloc_create", 00:11:19.431 "params": { 00:11:19.431 "name": "malloc0", 00:11:19.431 "num_blocks": 8192, 00:11:19.431 "block_size": 4096, 00:11:19.431 "physical_block_size": 4096, 00:11:19.431 "uuid": "f6c3b074-708d-405f-a018-a8add8b6b9a4", 00:11:19.431 "optimal_io_boundary": 0 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "bdev_wait_for_examine" 00:11:19.431 } 00:11:19.431 ] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "nbd", 00:11:19.431 "config": [] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "scheduler", 00:11:19.431 "config": [ 00:11:19.431 { 00:11:19.431 "method": "framework_set_scheduler", 00:11:19.431 "params": { 00:11:19.431 "name": "static" 00:11:19.431 } 00:11:19.431 } 00:11:19.431 ] 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "subsystem": "nvmf", 00:11:19.431 "config": [ 00:11:19.431 { 00:11:19.431 "method": "nvmf_set_config", 00:11:19.431 "params": { 00:11:19.431 "discovery_filter": "match_any", 00:11:19.431 "admin_cmd_passthru": { 00:11:19.431 "identify_ctrlr": false 00:11:19.431 } 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_set_max_subsystems", 00:11:19.431 "params": { 00:11:19.431 "max_subsystems": 1024 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_set_crdt", 00:11:19.431 "params": { 00:11:19.431 "crdt1": 0, 00:11:19.431 "crdt2": 0, 00:11:19.431 "crdt3": 0 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_create_transport", 00:11:19.431 "params": { 00:11:19.431 "trtype": "TCP", 00:11:19.431 "max_queue_depth": 128, 00:11:19.431 "max_io_qpairs_per_ctrlr": 127, 00:11:19.431 "in_capsule_data_size": 4096, 00:11:19.431 "max_io_size": 131072, 00:11:19.431 "io_unit_size": 131072, 00:11:19.431 "max_aq_depth": 128, 00:11:19.431 "num_shared_buffers": 511, 00:11:19.431 "buf_cache_size": 4294967295, 00:11:19.431 "dif_insert_or_strip": false, 00:11:19.431 "zcopy": false, 00:11:19.431 "c2h_success": false, 00:11:19.431 "sock_priority": 0, 00:11:19.431 "abort_timeout_sec": 1 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_create_subsystem", 00:11:19.431 "params": { 00:11:19.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.431 "allow_any_host": false, 00:11:19.431 "serial_number": "SPDK00000000000001", 00:11:19.431 "model_number": "SPDK bdev Controller", 00:11:19.431 "max_namespaces": 10, 00:11:19.431 "min_cntlid": 1, 00:11:19.431 "max_cntlid": 65519, 00:11:19.431 
"ana_reporting": false 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_subsystem_add_host", 00:11:19.431 "params": { 00:11:19.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.431 "host": "nqn.2016-06.io.spdk:host1", 00:11:19.431 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_subsystem_add_ns", 00:11:19.431 "params": { 00:11:19.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.431 "namespace": { 00:11:19.431 "nsid": 1, 00:11:19.431 "bdev_name": "malloc0", 00:11:19.431 "nguid": "F6C3B074708D405FA018A8ADD8B6B9A4", 00:11:19.431 "uuid": "f6c3b074-708d-405f-a018-a8add8b6b9a4" 00:11:19.431 } 00:11:19.431 } 00:11:19.431 }, 00:11:19.431 { 00:11:19.431 "method": "nvmf_subsystem_add_listener", 00:11:19.431 "params": { 00:11:19.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.431 "listen_address": { 00:11:19.431 "trtype": "TCP", 00:11:19.431 "adrfam": "IPv4", 00:11:19.431 "traddr": "10.0.0.2", 00:11:19.431 "trsvcid": "4420" 00:11:19.431 }, 00:11:19.431 "secure_channel": true 00:11:19.432 } 00:11:19.432 } 00:11:19.432 ] 00:11:19.432 } 00:11:19.432 ] 00:11:19.432 }' 00:11:19.432 00:23:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:19.432 00:23:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:19.432 00:23:35 -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 00:23:35 -- nvmf/common.sh@469 -- # nvmfpid=65174 00:11:19.432 00:23:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:19.432 00:23:35 -- nvmf/common.sh@470 -- # waitforlisten 65174 00:11:19.432 00:23:35 -- common/autotest_common.sh@819 -- # '[' -z 65174 ']' 00:11:19.432 00:23:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.432 00:23:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:19.432 00:23:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.432 00:23:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:19.432 00:23:35 -- common/autotest_common.sh@10 -- # set +x 00:11:19.691 [2024-09-29 00:23:35.289785] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:19.691 [2024-09-29 00:23:35.290138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.691 [2024-09-29 00:23:35.430007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.691 [2024-09-29 00:23:35.481170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:19.691 [2024-09-29 00:23:35.481644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.691 [2024-09-29 00:23:35.481667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.691 [2024-09-29 00:23:35.481677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:19.691 [2024-09-29 00:23:35.481712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.950 [2024-09-29 00:23:35.662857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.950 [2024-09-29 00:23:35.694838] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:19.950 [2024-09-29 00:23:35.695038] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.517 00:23:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:20.517 00:23:36 -- common/autotest_common.sh@852 -- # return 0 00:11:20.517 00:23:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:20.517 00:23:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:20.517 00:23:36 -- common/autotest_common.sh@10 -- # set +x 00:11:20.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:20.517 00:23:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.517 00:23:36 -- target/tls.sh@216 -- # bdevperf_pid=65206 00:11:20.517 00:23:36 -- target/tls.sh@217 -- # waitforlisten 65206 /var/tmp/bdevperf.sock 00:11:20.517 00:23:36 -- common/autotest_common.sh@819 -- # '[' -z 65206 ']' 00:11:20.517 00:23:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:20.517 00:23:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:20.517 00:23:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:20.517 00:23:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:20.517 00:23:36 -- target/tls.sh@213 -- # echo '{ 00:11:20.517 "subsystems": [ 00:11:20.517 { 00:11:20.517 "subsystem": "iobuf", 00:11:20.517 "config": [ 00:11:20.517 { 00:11:20.517 "method": "iobuf_set_options", 00:11:20.517 "params": { 00:11:20.517 "small_pool_count": 8192, 00:11:20.517 "large_pool_count": 1024, 00:11:20.517 "small_bufsize": 8192, 00:11:20.517 "large_bufsize": 135168 00:11:20.517 } 00:11:20.517 } 00:11:20.517 ] 00:11:20.517 }, 00:11:20.517 { 00:11:20.517 "subsystem": "sock", 00:11:20.517 "config": [ 00:11:20.517 { 00:11:20.517 "method": "sock_impl_set_options", 00:11:20.517 "params": { 00:11:20.517 "impl_name": "uring", 00:11:20.517 "recv_buf_size": 2097152, 00:11:20.517 "send_buf_size": 2097152, 00:11:20.517 "enable_recv_pipe": true, 00:11:20.517 "enable_quickack": false, 00:11:20.517 "enable_placement_id": 0, 00:11:20.517 "enable_zerocopy_send_server": false, 00:11:20.517 "enable_zerocopy_send_client": false, 00:11:20.517 "zerocopy_threshold": 0, 00:11:20.517 "tls_version": 0, 00:11:20.517 "enable_ktls": false 00:11:20.517 } 00:11:20.517 }, 00:11:20.517 { 00:11:20.517 "method": "sock_impl_set_options", 00:11:20.517 "params": { 00:11:20.517 "impl_name": "posix", 00:11:20.517 "recv_buf_size": 2097152, 00:11:20.517 "send_buf_size": 2097152, 00:11:20.517 "enable_recv_pipe": true, 00:11:20.517 "enable_quickack": false, 00:11:20.517 "enable_placement_id": 0, 00:11:20.517 "enable_zerocopy_send_server": true, 00:11:20.517 "enable_zerocopy_send_client": false, 00:11:20.517 "zerocopy_threshold": 0, 00:11:20.517 "tls_version": 0, 00:11:20.517 "enable_ktls": false 00:11:20.517 } 00:11:20.517 }, 00:11:20.517 { 00:11:20.517 "method": "sock_impl_set_options", 00:11:20.517 "params": { 00:11:20.517 "impl_name": "ssl", 00:11:20.517 "recv_buf_size": 4096, 00:11:20.517 "send_buf_size": 4096, 00:11:20.517 
"enable_recv_pipe": true, 00:11:20.517 "enable_quickack": false, 00:11:20.517 "enable_placement_id": 0, 00:11:20.518 "enable_zerocopy_send_server": true, 00:11:20.518 "enable_zerocopy_send_client": false, 00:11:20.518 "zerocopy_threshold": 0, 00:11:20.518 "tls_version": 0, 00:11:20.518 "enable_ktls": false 00:11:20.518 } 00:11:20.518 } 00:11:20.518 ] 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "subsystem": "vmd", 00:11:20.518 "config": [] 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "subsystem": "accel", 00:11:20.518 "config": [ 00:11:20.518 { 00:11:20.518 "method": "accel_set_options", 00:11:20.518 "params": { 00:11:20.518 "small_cache_size": 128, 00:11:20.518 "large_cache_size": 16, 00:11:20.518 "task_count": 2048, 00:11:20.518 "sequence_count": 2048, 00:11:20.518 "buf_count": 2048 00:11:20.518 } 00:11:20.518 } 00:11:20.518 ] 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "subsystem": "bdev", 00:11:20.518 "config": [ 00:11:20.518 { 00:11:20.518 "method": "bdev_set_options", 00:11:20.518 "params": { 00:11:20.518 "bdev_io_pool_size": 65535, 00:11:20.518 "bdev_io_cache_size": 256, 00:11:20.518 "bdev_auto_examine": true, 00:11:20.518 "iobuf_small_cache_size": 128, 00:11:20.518 "iobuf_large_cache_size": 16 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_raid_set_options", 00:11:20.518 "params": { 00:11:20.518 "process_window_size_kb": 1024 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_iscsi_set_options", 00:11:20.518 "params": { 00:11:20.518 "timeout_sec": 30 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_nvme_set_options", 00:11:20.518 "params": { 00:11:20.518 "action_on_timeout": "none", 00:11:20.518 "timeout_us": 0, 00:11:20.518 "timeout_admin_us": 0, 00:11:20.518 "keep_alive_timeout_ms": 10000, 00:11:20.518 "transport_retry_count": 4, 00:11:20.518 "arbitration_burst": 0, 00:11:20.518 "low_priority_weight": 0, 00:11:20.518 "medium_priority_weight": 0, 00:11:20.518 "high_priority_weight": 0, 00:11:20.518 "nvme_adminq_poll_period_us": 10000, 00:11:20.518 "nvme_ioq_poll_period_us": 0, 00:11:20.518 "io_queue_requests": 512, 00:11:20.518 "delay_cmd_submit": true, 00:11:20.518 "bdev_retry_count": 3, 00:11:20.518 "transport_ack_timeout": 0, 00:11:20.518 "ctrlr_loss_timeout_sec": 0, 00:11:20.518 "reconnect_delay_sec": 0, 00:11:20.518 "fast_io_fail_timeout_sec": 0, 00:11:20.518 "generate_uuids": false, 00:11:20.518 "transport_tos": 0, 00:11:20.518 "io_path_stat": false, 00:11:20.518 "allow_accel_sequence": false 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_nvme_attach_controller", 00:11:20.518 "params": { 00:11:20.518 "name": "TLSTEST", 00:11:20.518 "trtype": "TCP", 00:11:20.518 "adrfam": "IPv4", 00:11:20.518 "traddr": "10.0.0.2", 00:11:20.518 "trsvcid": "4420", 00:11:20.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:20.518 "prchk_reftag": false, 00:11:20.518 "prchk_guard": false, 00:11:20.518 "ctrlr_loss_timeout_sec": 0, 00:11:20.518 "reconnect_delay_sec": 0, 00:11:20.518 "fast_io_fail_timeout_sec": 0, 00:11:20.518 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:20.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:20.518 "hdgst": false, 00:11:20.518 "ddgst": false 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_nvme_set_hotplug", 00:11:20.518 "params": { 00:11:20.518 "period_us": 100000, 00:11:20.518 "enable": false 00:11:20.518 } 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "method": "bdev_wait_for_examine" 00:11:20.518 } 
00:11:20.518 ] 00:11:20.518 }, 00:11:20.518 { 00:11:20.518 "subsystem": "nbd", 00:11:20.518 "config": [] 00:11:20.518 } 00:11:20.518 ] 00:11:20.518 }' 00:11:20.518 00:23:36 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:20.518 00:23:36 -- common/autotest_common.sh@10 -- # set +x 00:11:20.518 [2024-09-29 00:23:36.302549] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:20.518 [2024-09-29 00:23:36.302860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65206 ] 00:11:20.778 [2024-09-29 00:23:36.442357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.778 [2024-09-29 00:23:36.495993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.778 [2024-09-29 00:23:36.616294] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:21.347 00:23:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:21.347 00:23:37 -- common/autotest_common.sh@852 -- # return 0 00:11:21.347 00:23:37 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:21.606 Running I/O for 10 seconds... 00:11:31.588 00:11:31.588 Latency(us) 00:11:31.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.588 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:31.588 Verification LBA range: start 0x0 length 0x2000 00:11:31.588 TLSTESTn1 : 10.01 6310.92 24.65 0.00 0.00 20248.46 5510.98 23473.80 00:11:31.588 =================================================================================================================== 00:11:31.588 Total : 6310.92 24.65 0.00 0.00 20248.46 5510.98 23473.80 00:11:31.588 0 00:11:31.588 00:23:47 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.588 00:23:47 -- target/tls.sh@223 -- # killprocess 65206 00:11:31.588 00:23:47 -- common/autotest_common.sh@926 -- # '[' -z 65206 ']' 00:11:31.588 00:23:47 -- common/autotest_common.sh@930 -- # kill -0 65206 00:11:31.588 00:23:47 -- common/autotest_common.sh@931 -- # uname 00:11:31.588 00:23:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:31.588 00:23:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65206 00:11:31.588 killing process with pid 65206 00:11:31.588 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.588 00:11:31.588 Latency(us) 00:11:31.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.588 =================================================================================================================== 00:11:31.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.588 00:23:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:31.588 00:23:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:31.588 00:23:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65206' 00:11:31.588 00:23:47 -- common/autotest_common.sh@945 -- # kill 65206 00:11:31.588 00:23:47 -- common/autotest_common.sh@950 -- # wait 65206 00:11:31.847 00:23:47 -- target/tls.sh@224 -- # killprocess 65174 00:11:31.847 00:23:47 -- 
common/autotest_common.sh@926 -- # '[' -z 65174 ']' 00:11:31.847 00:23:47 -- common/autotest_common.sh@930 -- # kill -0 65174 00:11:31.847 00:23:47 -- common/autotest_common.sh@931 -- # uname 00:11:31.847 00:23:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:31.847 00:23:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65174 00:11:31.847 killing process with pid 65174 00:11:31.847 00:23:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:31.847 00:23:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:31.847 00:23:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65174' 00:11:31.847 00:23:47 -- common/autotest_common.sh@945 -- # kill 65174 00:11:31.847 00:23:47 -- common/autotest_common.sh@950 -- # wait 65174 00:11:32.106 00:23:47 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:32.106 00:23:47 -- target/tls.sh@227 -- # cleanup 00:11:32.106 00:23:47 -- target/tls.sh@15 -- # process_shm --id 0 00:11:32.106 00:23:47 -- common/autotest_common.sh@796 -- # type=--id 00:11:32.106 00:23:47 -- common/autotest_common.sh@797 -- # id=0 00:11:32.106 00:23:47 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:11:32.106 00:23:47 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:32.106 00:23:47 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:11:32.106 00:23:47 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:11:32.106 00:23:47 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:11:32.106 00:23:47 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:32.106 nvmf_trace.0 00:11:32.107 00:23:47 -- common/autotest_common.sh@811 -- # return 0 00:11:32.107 00:23:47 -- target/tls.sh@16 -- # killprocess 65206 00:11:32.107 00:23:47 -- common/autotest_common.sh@926 -- # '[' -z 65206 ']' 00:11:32.107 00:23:47 -- common/autotest_common.sh@930 -- # kill -0 65206 00:11:32.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65206) - No such process 00:11:32.107 Process with pid 65206 is not found 00:11:32.107 00:23:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 65206 is not found' 00:11:32.107 00:23:47 -- target/tls.sh@17 -- # nvmftestfini 00:11:32.107 00:23:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:32.107 00:23:47 -- nvmf/common.sh@116 -- # sync 00:11:32.107 00:23:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:32.107 00:23:47 -- nvmf/common.sh@119 -- # set +e 00:11:32.107 00:23:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:32.107 00:23:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:32.107 rmmod nvme_tcp 00:11:32.107 rmmod nvme_fabrics 00:11:32.107 rmmod nvme_keyring 00:11:32.107 00:23:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:32.107 00:23:47 -- nvmf/common.sh@123 -- # set -e 00:11:32.107 00:23:47 -- nvmf/common.sh@124 -- # return 0 00:11:32.107 Process with pid 65174 is not found 00:11:32.107 00:23:47 -- nvmf/common.sh@477 -- # '[' -n 65174 ']' 00:11:32.107 00:23:47 -- nvmf/common.sh@478 -- # killprocess 65174 00:11:32.107 00:23:47 -- common/autotest_common.sh@926 -- # '[' -z 65174 ']' 00:11:32.107 00:23:47 -- common/autotest_common.sh@930 -- # kill -0 65174 00:11:32.107 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (65174) - No such process 00:11:32.107 00:23:47 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 65174 is not found' 00:11:32.107 00:23:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:32.107 00:23:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:32.107 00:23:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:32.107 00:23:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.107 00:23:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:32.107 00:23:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.107 00:23:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.107 00:23:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.366 00:23:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:32.366 00:23:47 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:32.366 00:11:32.366 real 1m10.512s 00:11:32.366 user 1m49.710s 00:11:32.366 sys 0m23.519s 00:11:32.366 00:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.366 00:23:47 -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 ************************************ 00:11:32.366 END TEST nvmf_tls 00:11:32.366 ************************************ 00:11:32.366 00:23:48 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:32.366 00:23:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:32.366 00:23:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.366 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:11:32.366 ************************************ 00:11:32.366 START TEST nvmf_fips 00:11:32.366 ************************************ 00:11:32.366 00:23:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:32.366 * Looking for test storage... 
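Note on the nvmf_fips run that starts here: fips.sh gates itself on the OpenSSL version and on the presence of the FIPS provider module before doing anything else, as the trace that follows shows (version 3.1.1 compared against a 3.0.0 floor, then a probe for fips.so under the modules directory). A condensed sketch of that gate, using plain sort -V in place of the cmp_versions helper from scripts/common.sh that the script actually calls:

  # Require OpenSSL >= 3.0.0, then look for the FIPS provider module.
  target=3.0.0
  ver=$(openssl version | awk '{print $2}')     # 3.1.1 in this run
  if [ "$(printf '%s\n%s\n' "$target" "$ver" | sort -V | head -n1)" != "$target" ]; then
      echo "OpenSSL $ver is older than $target" >&2
      exit 1
  fi
  modulesdir=$(openssl info -modulesdir)
  if [ ! -f "$modulesdir/fips.so" ]; then
      echo "no FIPS provider at $modulesdir/fips.so" >&2
  fi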
00:11:32.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:32.366 00:23:48 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:32.366 00:23:48 -- nvmf/common.sh@7 -- # uname -s 00:11:32.366 00:23:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.366 00:23:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.366 00:23:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.366 00:23:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.366 00:23:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.366 00:23:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.366 00:23:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.366 00:23:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.366 00:23:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.366 00:23:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.366 00:23:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:32.367 00:23:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:32.367 00:23:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.367 00:23:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.367 00:23:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:32.367 00:23:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.367 00:23:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.367 00:23:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.367 00:23:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.367 00:23:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.367 00:23:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.367 00:23:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.367 00:23:48 -- paths/export.sh@5 -- 
# export PATH 00:11:32.367 00:23:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.367 00:23:48 -- nvmf/common.sh@46 -- # : 0 00:11:32.367 00:23:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:32.367 00:23:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:32.367 00:23:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:32.367 00:23:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.367 00:23:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.367 00:23:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:32.367 00:23:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:32.367 00:23:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:32.367 00:23:48 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.367 00:23:48 -- fips/fips.sh@89 -- # check_openssl_version 00:11:32.367 00:23:48 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:32.367 00:23:48 -- fips/fips.sh@85 -- # openssl version 00:11:32.367 00:23:48 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:32.367 00:23:48 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:32.367 00:23:48 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:32.367 00:23:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:32.367 00:23:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:32.367 00:23:48 -- scripts/common.sh@335 -- # IFS=.-: 00:11:32.367 00:23:48 -- scripts/common.sh@335 -- # read -ra ver1 00:11:32.367 00:23:48 -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.367 00:23:48 -- scripts/common.sh@336 -- # read -ra ver2 00:11:32.367 00:23:48 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:32.367 00:23:48 -- scripts/common.sh@339 -- # ver1_l=3 00:11:32.367 00:23:48 -- scripts/common.sh@340 -- # ver2_l=3 00:11:32.367 00:23:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:32.367 00:23:48 -- scripts/common.sh@343 -- # case "$op" in 00:11:32.367 00:23:48 -- scripts/common.sh@347 -- # : 1 00:11:32.367 00:23:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:32.367 00:23:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.367 00:23:48 -- scripts/common.sh@364 -- # decimal 3 00:11:32.367 00:23:48 -- scripts/common.sh@352 -- # local d=3 00:11:32.367 00:23:48 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:32.367 00:23:48 -- scripts/common.sh@354 -- # echo 3 00:11:32.367 00:23:48 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:32.367 00:23:48 -- scripts/common.sh@365 -- # decimal 3 00:11:32.367 00:23:48 -- scripts/common.sh@352 -- # local d=3 00:11:32.367 00:23:48 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:32.367 00:23:48 -- scripts/common.sh@354 -- # echo 3 00:11:32.367 00:23:48 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:32.367 00:23:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:32.367 00:23:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:32.367 00:23:48 -- scripts/common.sh@363 -- # (( v++ )) 00:11:32.367 00:23:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.367 00:23:48 -- scripts/common.sh@364 -- # decimal 1 00:11:32.367 00:23:48 -- scripts/common.sh@352 -- # local d=1 00:11:32.367 00:23:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.367 00:23:48 -- scripts/common.sh@354 -- # echo 1 00:11:32.367 00:23:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:32.367 00:23:48 -- scripts/common.sh@365 -- # decimal 0 00:11:32.367 00:23:48 -- scripts/common.sh@352 -- # local d=0 00:11:32.367 00:23:48 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:32.367 00:23:48 -- scripts/common.sh@354 -- # echo 0 00:11:32.367 00:23:48 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:32.367 00:23:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:32.367 00:23:48 -- scripts/common.sh@366 -- # return 0 00:11:32.367 00:23:48 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:32.367 00:23:48 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:11:32.367 00:23:48 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:32.367 00:23:48 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:32.367 00:23:48 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:32.367 00:23:48 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:32.367 00:23:48 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:32.367 00:23:48 -- fips/fips.sh@113 -- # build_openssl_config 00:11:32.367 00:23:48 -- fips/fips.sh@37 -- # cat 00:11:32.367 00:23:48 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:11:32.367 00:23:48 -- fips/fips.sh@58 -- # cat - 00:11:32.367 00:23:48 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:32.367 00:23:48 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:32.367 00:23:48 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:32.367 00:23:48 -- fips/fips.sh@116 -- # openssl list -providers 00:11:32.367 00:23:48 -- fips/fips.sh@116 -- # grep name 00:11:32.626 00:23:48 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:32.626 00:23:48 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:32.626 00:23:48 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:32.626 00:23:48 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:32.626 00:23:48 -- fips/fips.sh@127 -- # : 00:11:32.626 00:23:48 -- common/autotest_common.sh@640 -- # local es=0 00:11:32.626 00:23:48 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:32.626 00:23:48 -- common/autotest_common.sh@628 -- # local arg=openssl 00:11:32.626 00:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:32.626 00:23:48 -- common/autotest_common.sh@632 -- # type -t openssl 00:11:32.626 00:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:32.626 00:23:48 -- common/autotest_common.sh@634 -- # type -P openssl 00:11:32.626 00:23:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:32.626 00:23:48 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:11:32.626 00:23:48 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:11:32.626 00:23:48 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:11:32.626 Error setting digest 00:11:32.626 40C26E93A57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:32.626 40C26E93A57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:32.626 00:23:48 -- common/autotest_common.sh@643 -- # es=1 00:11:32.626 00:23:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:32.626 00:23:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:32.626 00:23:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:32.626 00:23:48 -- fips/fips.sh@130 -- # nvmftestinit 00:11:32.627 00:23:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:32.627 00:23:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.627 00:23:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:32.627 00:23:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:32.627 00:23:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:32.627 00:23:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.627 00:23:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.627 00:23:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.627 00:23:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:32.627 00:23:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:32.627 00:23:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:32.627 00:23:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:32.627 00:23:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:32.627 00:23:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:32.627 00:23:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.627 00:23:48 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.627 00:23:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:32.627 00:23:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:32.627 00:23:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:32.627 00:23:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:32.627 00:23:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:32.627 00:23:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.627 00:23:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:32.627 00:23:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:32.627 00:23:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:32.627 00:23:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:32.627 00:23:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:32.627 00:23:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:32.627 Cannot find device "nvmf_tgt_br" 00:11:32.627 00:23:48 -- nvmf/common.sh@154 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:32.627 Cannot find device "nvmf_tgt_br2" 00:11:32.627 00:23:48 -- nvmf/common.sh@155 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:32.627 00:23:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:32.627 Cannot find device "nvmf_tgt_br" 00:11:32.627 00:23:48 -- nvmf/common.sh@157 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:32.627 Cannot find device "nvmf_tgt_br2" 00:11:32.627 00:23:48 -- nvmf/common.sh@158 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:32.627 00:23:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:32.627 00:23:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.627 00:23:48 -- nvmf/common.sh@161 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.627 00:23:48 -- nvmf/common.sh@162 -- # true 00:11:32.627 00:23:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.627 00:23:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.627 00:23:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.627 00:23:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.627 00:23:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.627 00:23:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.886 00:23:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.886 00:23:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:32.886 00:23:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:32.886 00:23:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:32.886 00:23:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:32.886 00:23:48 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:32.886 00:23:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:32.886 00:23:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.886 00:23:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.886 00:23:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.886 00:23:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:32.886 00:23:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:32.886 00:23:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.886 00:23:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.886 00:23:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.886 00:23:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.886 00:23:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.886 00:23:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:32.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:32.886 00:11:32.886 --- 10.0.0.2 ping statistics --- 00:11:32.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.886 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:32.886 00:23:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:32.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:32.886 00:11:32.886 --- 10.0.0.3 ping statistics --- 00:11:32.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.886 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:32.886 00:23:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:32.886 00:11:32.886 --- 10.0.0.1 ping statistics --- 00:11:32.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.886 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:32.886 00:23:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.886 00:23:48 -- nvmf/common.sh@421 -- # return 0 00:11:32.886 00:23:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:32.886 00:23:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.886 00:23:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:32.886 00:23:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:32.886 00:23:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.886 00:23:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:32.886 00:23:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:32.886 00:23:48 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:32.886 00:23:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:32.886 00:23:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:32.886 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:11:32.886 00:23:48 -- nvmf/common.sh@469 -- # nvmfpid=65547 00:11:32.886 00:23:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:32.886 00:23:48 -- nvmf/common.sh@470 -- # waitforlisten 65547 00:11:32.886 00:23:48 -- common/autotest_common.sh@819 -- # '[' -z 65547 ']' 00:11:32.886 00:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.886 00:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:32.886 00:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.886 00:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:32.886 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:11:33.145 [2024-09-29 00:23:48.736077] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:11:33.145 [2024-09-29 00:23:48.736457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.145 [2024-09-29 00:23:48.877993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.145 [2024-09-29 00:23:48.932133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.145 [2024-09-29 00:23:48.932313] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.145 [2024-09-29 00:23:48.932328] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.146 [2024-09-29 00:23:48.932337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:33.146 [2024-09-29 00:23:48.932405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.080 00:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:34.080 00:23:49 -- common/autotest_common.sh@852 -- # return 0 00:11:34.080 00:23:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:34.080 00:23:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:34.080 00:23:49 -- common/autotest_common.sh@10 -- # set +x 00:11:34.080 00:23:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.080 00:23:49 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:34.080 00:23:49 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:34.080 00:23:49 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:34.080 00:23:49 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:34.080 00:23:49 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:34.080 00:23:49 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:34.080 00:23:49 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:34.080 00:23:49 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:34.339 [2024-09-29 00:23:50.030264] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.339 [2024-09-29 00:23:50.046234] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:34.339 [2024-09-29 00:23:50.046467] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.339 malloc0 00:11:34.339 00:23:50 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:34.339 00:23:50 -- fips/fips.sh@147 -- # bdevperf_pid=65587 00:11:34.339 00:23:50 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:34.339 00:23:50 -- fips/fips.sh@148 -- # waitforlisten 65587 /var/tmp/bdevperf.sock 00:11:34.339 00:23:50 -- common/autotest_common.sh@819 -- # '[' -z 65587 ']' 00:11:34.339 00:23:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:34.339 00:23:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.339 00:23:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:34.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:34.339 00:23:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.339 00:23:50 -- common/autotest_common.sh@10 -- # set +x 00:11:34.339 [2024-09-29 00:23:50.179741] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:11:34.339 [2024-09-29 00:23:50.179862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65587 ] 00:11:34.597 [2024-09-29 00:23:50.312755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.597 [2024-09-29 00:23:50.373835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.533 00:23:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:35.533 00:23:51 -- common/autotest_common.sh@852 -- # return 0 00:11:35.533 00:23:51 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:35.792 [2024-09-29 00:23:51.395010] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:35.792 TLSTESTn1 00:11:35.792 00:23:51 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:35.792 Running I/O for 10 seconds... 00:11:47.996 00:11:47.996 Latency(us) 00:11:47.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:47.996 Verification LBA range: start 0x0 length 0x2000 00:11:47.996 TLSTESTn1 : 10.01 5997.05 23.43 0.00 0.00 21308.60 4468.36 20018.27 00:11:47.996 =================================================================================================================== 00:11:47.996 Total : 5997.05 23.43 0.00 0.00 21308.60 4468.36 20018.27 00:11:47.996 0 00:11:47.996 00:24:01 -- fips/fips.sh@1 -- # cleanup 00:11:47.996 00:24:01 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:47.996 00:24:01 -- common/autotest_common.sh@796 -- # type=--id 00:11:47.996 00:24:01 -- common/autotest_common.sh@797 -- # id=0 00:11:47.996 00:24:01 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:11:47.996 00:24:01 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:47.996 00:24:01 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:11:47.996 00:24:01 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:11:47.996 00:24:01 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:11:47.996 00:24:01 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:47.996 nvmf_trace.0 00:11:47.996 00:24:01 -- common/autotest_common.sh@811 -- # return 0 00:11:47.996 00:24:01 -- fips/fips.sh@16 -- # killprocess 65587 00:11:47.996 00:24:01 -- common/autotest_common.sh@926 -- # '[' -z 65587 ']' 00:11:47.996 00:24:01 -- common/autotest_common.sh@930 -- # kill -0 65587 00:11:47.996 00:24:01 -- common/autotest_common.sh@931 -- # uname 00:11:47.996 00:24:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.996 00:24:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65587 00:11:47.996 killing process with pid 65587 00:11:47.996 Received shutdown signal, test time was about 10.000000 seconds 00:11:47.996 00:11:47.996 Latency(us) 00:11:47.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.996 
=================================================================================================================== 00:11:47.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:47.996 00:24:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:47.996 00:24:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:47.996 00:24:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65587' 00:11:47.996 00:24:01 -- common/autotest_common.sh@945 -- # kill 65587 00:11:47.996 00:24:01 -- common/autotest_common.sh@950 -- # wait 65587 00:11:47.996 00:24:01 -- fips/fips.sh@17 -- # nvmftestfini 00:11:47.997 00:24:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:47.997 00:24:01 -- nvmf/common.sh@116 -- # sync 00:11:47.997 00:24:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:47.997 00:24:01 -- nvmf/common.sh@119 -- # set +e 00:11:47.997 00:24:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:47.997 00:24:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:47.997 rmmod nvme_tcp 00:11:47.997 rmmod nvme_fabrics 00:11:47.997 rmmod nvme_keyring 00:11:47.997 00:24:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:47.997 00:24:02 -- nvmf/common.sh@123 -- # set -e 00:11:47.997 00:24:02 -- nvmf/common.sh@124 -- # return 0 00:11:47.997 00:24:02 -- nvmf/common.sh@477 -- # '[' -n 65547 ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@478 -- # killprocess 65547 00:11:47.997 00:24:02 -- common/autotest_common.sh@926 -- # '[' -z 65547 ']' 00:11:47.997 00:24:02 -- common/autotest_common.sh@930 -- # kill -0 65547 00:11:47.997 00:24:02 -- common/autotest_common.sh@931 -- # uname 00:11:47.997 00:24:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.997 00:24:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65547 00:11:47.997 killing process with pid 65547 00:11:47.997 00:24:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:47.997 00:24:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:47.997 00:24:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65547' 00:11:47.997 00:24:02 -- common/autotest_common.sh@945 -- # kill 65547 00:11:47.997 00:24:02 -- common/autotest_common.sh@950 -- # wait 65547 00:11:47.997 00:24:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:47.997 00:24:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:47.997 00:24:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.997 00:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.997 00:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.997 00:24:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:47.997 00:24:02 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:47.997 00:11:47.997 real 0m14.267s 00:11:47.997 user 0m19.539s 00:11:47.997 sys 0m5.714s 00:11:47.997 00:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.997 00:24:02 -- common/autotest_common.sh@10 -- # set +x 00:11:47.997 ************************************ 00:11:47.997 END TEST nvmf_fips 00:11:47.997 ************************************ 00:11:47.997 00:24:02 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:47.997 00:24:02 -- nvmf/nvmf.sh@64 -- # 
run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:47.997 00:24:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:47.997 00:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.997 00:24:02 -- common/autotest_common.sh@10 -- # set +x 00:11:47.997 ************************************ 00:11:47.997 START TEST nvmf_fuzz 00:11:47.997 ************************************ 00:11:47.997 00:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:47.997 * Looking for test storage... 00:11:47.997 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.997 00:24:02 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.997 00:24:02 -- nvmf/common.sh@7 -- # uname -s 00:11:47.997 00:24:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.997 00:24:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.997 00:24:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.997 00:24:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.997 00:24:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.997 00:24:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.997 00:24:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.997 00:24:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.997 00:24:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.997 00:24:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:47.997 00:24:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:47.997 00:24:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.997 00:24:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.997 00:24:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.997 00:24:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.997 00:24:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.997 00:24:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.997 00:24:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.997 00:24:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.997 00:24:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.997 00:24:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.997 00:24:02 -- paths/export.sh@5 -- # export PATH 00:11:47.997 00:24:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.997 00:24:02 -- nvmf/common.sh@46 -- # : 0 00:11:47.997 00:24:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:47.997 00:24:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:47.997 00:24:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.997 00:24:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.997 00:24:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:47.997 00:24:02 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:47.997 00:24:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:47.997 00:24:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.997 00:24:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:47.997 00:24:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:47.997 00:24:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:47.997 00:24:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.997 00:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.997 00:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.997 00:24:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:47.997 00:24:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:47.997 00:24:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.997 00:24:02 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.997 00:24:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:47.997 00:24:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:47.997 00:24:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.997 00:24:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.997 00:24:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.997 00:24:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.997 00:24:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.997 00:24:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.997 00:24:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.997 00:24:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.997 00:24:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:47.997 00:24:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:47.997 Cannot find device "nvmf_tgt_br" 00:11:47.997 00:24:02 -- nvmf/common.sh@154 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.997 Cannot find device "nvmf_tgt_br2" 00:11:47.997 00:24:02 -- nvmf/common.sh@155 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:47.997 00:24:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:47.997 Cannot find device "nvmf_tgt_br" 00:11:47.997 00:24:02 -- nvmf/common.sh@157 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:47.997 Cannot find device "nvmf_tgt_br2" 00:11:47.997 00:24:02 -- nvmf/common.sh@158 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:47.997 00:24:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:47.997 00:24:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.997 00:24:02 -- nvmf/common.sh@161 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.997 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.997 00:24:02 -- nvmf/common.sh@162 -- # true 00:11:47.997 00:24:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.997 00:24:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.997 00:24:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.997 00:24:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.998 00:24:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.998 00:24:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.998 00:24:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.998 00:24:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:47.998 00:24:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:47.998 00:24:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:47.998 00:24:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:47.998 00:24:02 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:47.998 00:24:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:47.998 00:24:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.998 00:24:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.998 00:24:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.998 00:24:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:47.998 00:24:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:47.998 00:24:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.998 00:24:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.998 00:24:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.998 00:24:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.998 00:24:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.998 00:24:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:47.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:47.998 00:11:47.998 --- 10.0.0.2 ping statistics --- 00:11:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.998 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:47.998 00:24:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:47.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:11:47.998 00:11:47.998 --- 10.0.0.3 ping statistics --- 00:11:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.998 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:47.998 00:24:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:11:47.998 00:11:47.998 --- 10.0.0.1 ping statistics --- 00:11:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.998 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:47.998 00:24:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.998 00:24:02 -- nvmf/common.sh@421 -- # return 0 00:11:47.998 00:24:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:47.998 00:24:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.998 00:24:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:47.998 00:24:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:47.998 00:24:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.998 00:24:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:47.998 00:24:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.998 00:24:02 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=65919 00:11:47.998 00:24:02 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:47.998 00:24:02 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:47.998 00:24:02 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 65919 00:11:47.998 00:24:02 -- common/autotest_common.sh@819 -- # '[' -z 65919 ']' 00:11:47.998 00:24:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.998 00:24:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.998 00:24:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:47.998 00:24:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.998 00:24:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 00:24:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.257 00:24:03 -- common/autotest_common.sh@852 -- # return 0 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.257 00:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.257 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 00:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:48.257 00:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.257 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 Malloc0 00:11:48.257 00:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.257 00:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.257 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 00:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.257 00:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.257 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 00:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.257 00:24:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.257 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:11:48.257 00:24:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:48.257 00:24:03 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:48.516 Shutting down the fuzz application 00:11:48.516 00:24:04 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:48.776 Shutting down the fuzz application 00:11:48.776 00:24:04 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.776 00:24:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.776 00:24:04 -- common/autotest_common.sh@10 -- # set +x 00:11:48.776 00:24:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.776 00:24:04 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:48.776 00:24:04 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:48.776 00:24:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:48.776 00:24:04 -- nvmf/common.sh@116 -- # sync 00:11:48.776 00:24:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:48.776 00:24:04 -- nvmf/common.sh@119 -- # set +e 00:11:48.776 00:24:04 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:11:48.776 00:24:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:48.776 rmmod nvme_tcp 00:11:48.776 rmmod nvme_fabrics 00:11:49.035 rmmod nvme_keyring 00:11:49.035 00:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:49.035 00:24:04 -- nvmf/common.sh@123 -- # set -e 00:11:49.035 00:24:04 -- nvmf/common.sh@124 -- # return 0 00:11:49.035 00:24:04 -- nvmf/common.sh@477 -- # '[' -n 65919 ']' 00:11:49.035 00:24:04 -- nvmf/common.sh@478 -- # killprocess 65919 00:11:49.035 00:24:04 -- common/autotest_common.sh@926 -- # '[' -z 65919 ']' 00:11:49.035 00:24:04 -- common/autotest_common.sh@930 -- # kill -0 65919 00:11:49.035 00:24:04 -- common/autotest_common.sh@931 -- # uname 00:11:49.035 00:24:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:49.035 00:24:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65919 00:11:49.035 00:24:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:49.035 00:24:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:49.035 00:24:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65919' 00:11:49.035 killing process with pid 65919 00:11:49.035 00:24:04 -- common/autotest_common.sh@945 -- # kill 65919 00:11:49.035 00:24:04 -- common/autotest_common.sh@950 -- # wait 65919 00:11:49.295 00:24:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:49.295 00:24:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:49.295 00:24:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:49.295 00:24:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.295 00:24:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:49.295 00:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.295 00:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.295 00:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.295 00:24:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:49.295 00:24:04 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:49.295 ************************************ 00:11:49.295 END TEST nvmf_fuzz 00:11:49.295 ************************************ 00:11:49.295 00:11:49.295 real 0m2.594s 00:11:49.295 user 0m2.803s 00:11:49.295 sys 0m0.515s 00:11:49.295 00:24:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.295 00:24:04 -- common/autotest_common.sh@10 -- # set +x 00:11:49.295 00:24:04 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:49.295 00:24:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:49.295 00:24:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.295 00:24:04 -- common/autotest_common.sh@10 -- # set +x 00:11:49.295 ************************************ 00:11:49.295 START TEST nvmf_multiconnection 00:11:49.295 ************************************ 00:11:49.295 00:24:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:49.295 * Looking for test storage... 
00:11:49.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.295 00:24:05 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.295 00:24:05 -- nvmf/common.sh@7 -- # uname -s 00:11:49.295 00:24:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.295 00:24:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.295 00:24:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.295 00:24:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.295 00:24:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.295 00:24:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.295 00:24:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.295 00:24:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.295 00:24:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.295 00:24:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:49.295 00:24:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:11:49.295 00:24:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.295 00:24:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.295 00:24:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.295 00:24:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.295 00:24:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.295 00:24:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.295 00:24:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.295 00:24:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.295 00:24:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.295 00:24:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.295 00:24:05 -- 
paths/export.sh@5 -- # export PATH 00:11:49.295 00:24:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.295 00:24:05 -- nvmf/common.sh@46 -- # : 0 00:11:49.295 00:24:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:49.295 00:24:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:49.295 00:24:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:49.295 00:24:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.295 00:24:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.295 00:24:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:49.295 00:24:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:49.295 00:24:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:49.295 00:24:05 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.295 00:24:05 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.295 00:24:05 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:49.295 00:24:05 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:49.295 00:24:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:49.295 00:24:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.295 00:24:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:49.295 00:24:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:49.295 00:24:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:49.295 00:24:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.295 00:24:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.295 00:24:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.295 00:24:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:49.295 00:24:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:49.295 00:24:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.295 00:24:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.295 00:24:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.295 00:24:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:49.295 00:24:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.295 00:24:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.295 00:24:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.295 00:24:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.295 00:24:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.295 00:24:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.295 00:24:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.295 00:24:05 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.295 00:24:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:49.295 00:24:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:49.295 Cannot find device "nvmf_tgt_br" 00:11:49.295 00:24:05 -- nvmf/common.sh@154 -- # true 00:11:49.295 00:24:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.295 Cannot find device "nvmf_tgt_br2" 00:11:49.295 00:24:05 -- nvmf/common.sh@155 -- # true 00:11:49.295 00:24:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:49.554 00:24:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:49.554 Cannot find device "nvmf_tgt_br" 00:11:49.554 00:24:05 -- nvmf/common.sh@157 -- # true 00:11:49.554 00:24:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:49.554 Cannot find device "nvmf_tgt_br2" 00:11:49.554 00:24:05 -- nvmf/common.sh@158 -- # true 00:11:49.554 00:24:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:49.554 00:24:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:49.554 00:24:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.554 00:24:05 -- nvmf/common.sh@161 -- # true 00:11:49.554 00:24:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.554 00:24:05 -- nvmf/common.sh@162 -- # true 00:11:49.554 00:24:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.554 00:24:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.554 00:24:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.554 00:24:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.554 00:24:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.554 00:24:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.554 00:24:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.554 00:24:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.554 00:24:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.554 00:24:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:49.554 00:24:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:49.554 00:24:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:49.554 00:24:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:49.554 00:24:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.554 00:24:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.554 00:24:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.554 00:24:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:49.554 00:24:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:49.554 00:24:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.554 00:24:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.554 00:24:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.812 
00:24:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.812 00:24:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.812 00:24:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:49.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:49.812 00:11:49.812 --- 10.0.0.2 ping statistics --- 00:11:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.812 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:49.812 00:24:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:49.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:49.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:49.812 00:11:49.812 --- 10.0.0.3 ping statistics --- 00:11:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.812 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:49.812 00:24:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:49.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:49.812 00:11:49.812 --- 10.0.0.1 ping statistics --- 00:11:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.812 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:49.812 00:24:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.812 00:24:05 -- nvmf/common.sh@421 -- # return 0 00:11:49.812 00:24:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:49.812 00:24:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.812 00:24:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:49.812 00:24:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:49.812 00:24:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.812 00:24:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:49.812 00:24:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:49.812 00:24:05 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:49.812 00:24:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:49.812 00:24:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:49.812 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 00:24:05 -- nvmf/common.sh@469 -- # nvmfpid=66107 00:11:49.812 00:24:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.812 00:24:05 -- nvmf/common.sh@470 -- # waitforlisten 66107 00:11:49.812 00:24:05 -- common/autotest_common.sh@819 -- # '[' -z 66107 ']' 00:11:49.812 00:24:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.812 00:24:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.812 00:24:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.812 00:24:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.812 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 [2024-09-29 00:24:05.507394] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
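With the links bridged, the script opens the NVMe/TCP port on the initiator interface, allows forwarding across the bridge, verifies connectivity in both directions, loads the host-side kernel module, and launches the target application inside the namespace. A minimal sketch of those steps, assuming the same addresses and the default RPC socket /var/tmp/spdk.sock; the socket-wait loop here only approximates the suite's waitforlisten helper:

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> first target interface
  ping -c 1 10.0.0.3                                    # initiator -> second target interface
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # wait for the app to come up and create its RPC socket before issuing RPCs
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done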
00:11:49.812 [2024-09-29 00:24:05.507497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.812 [2024-09-29 00:24:05.644326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.070 [2024-09-29 00:24:05.700959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:50.070 [2024-09-29 00:24:05.701515] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.070 [2024-09-29 00:24:05.701797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.070 [2024-09-29 00:24:05.702007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.070 [2024-09-29 00:24:05.702377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.070 [2024-09-29 00:24:05.702637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.070 [2024-09-29 00:24:05.702469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.070 [2024-09-29 00:24:05.702522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.008 00:24:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.008 00:24:06 -- common/autotest_common.sh@852 -- # return 0 00:11:51.008 00:24:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:51.008 00:24:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 00:24:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.008 00:24:06 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 [2024-09-29 00:24:06.538788] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:51.008 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.008 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 Malloc1 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 [2024-09-29 00:24:06.617067] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.008 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 Malloc2 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.008 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:51.008 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.008 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 Malloc3 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:51.009 
00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 Malloc4 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 Malloc5 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 Malloc6 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 Malloc7 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.009 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.009 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.009 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:51.009 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.009 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 Malloc8 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 
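Each of the eleven subsystems in this test is provisioned by the same four RPCs: create a 64 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and expose a TCP listener on 10.0.0.2:4420 (the pattern continues below for Malloc9 through Malloc11). Since rpc_cmd effectively forwards to scripts/rpc.py against the target's RPC socket, an equivalent standalone loop might look like this sketch (transport creation included, as issued earlier in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done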
00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.269 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 Malloc9 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.269 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 Malloc10 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.269 00:24:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:51.269 00:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:06 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 Malloc11 00:11:51.269 00:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:51.269 00:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:07 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:51.269 00:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:07 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:51.269 00:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.269 00:24:07 -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 00:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.269 00:24:07 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:51.269 00:24:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.269 00:24:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.529 00:24:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:51.529 00:24:07 -- common/autotest_common.sh@1177 -- # local i=0 00:11:51.529 00:24:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.529 00:24:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:51.529 00:24:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:53.434 00:24:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:53.434 00:24:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:53.434 00:24:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:11:53.434 00:24:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:53.434 00:24:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.434 00:24:09 -- common/autotest_common.sh@1187 -- # return 0 00:11:53.434 00:24:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:53.434 00:24:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:53.692 00:24:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:53.692 00:24:09 -- common/autotest_common.sh@1177 -- # local i=0 00:11:53.692 00:24:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.692 00:24:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:53.692 00:24:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:55.602 00:24:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:55.602 00:24:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:55.602 00:24:11 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:11:55.602 00:24:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:55.602 00:24:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.602 00:24:11 -- common/autotest_common.sh@1187 -- # return 0 00:11:55.602 00:24:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:11:55.602 00:24:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:55.861 00:24:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:55.861 00:24:11 -- common/autotest_common.sh@1177 -- # local i=0 00:11:55.861 00:24:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.861 00:24:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:55.861 00:24:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:57.767 00:24:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:57.767 00:24:13 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:11:57.767 00:24:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:57.767 00:24:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:57.767 00:24:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.767 00:24:13 -- common/autotest_common.sh@1187 -- # return 0 00:11:57.767 00:24:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:57.767 00:24:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:11:58.026 00:24:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:11:58.026 00:24:13 -- common/autotest_common.sh@1177 -- # local i=0 00:11:58.026 00:24:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.026 00:24:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:58.026 00:24:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:59.928 00:24:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:59.929 00:24:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:59.929 00:24:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:11:59.929 00:24:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:59.929 00:24:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.929 00:24:15 -- common/autotest_common.sh@1187 -- # return 0 00:11:59.929 00:24:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:59.929 00:24:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:00.187 00:24:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:00.187 00:24:15 -- common/autotest_common.sh@1177 -- # local i=0 00:12:00.187 00:24:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.187 00:24:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:00.187 00:24:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:02.086 00:24:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:02.086 00:24:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:02.086 00:24:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:12:02.086 00:24:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:02.086 00:24:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.086 00:24:17 
-- common/autotest_common.sh@1187 -- # return 0 00:12:02.086 00:24:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:02.086 00:24:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:02.344 00:24:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:02.344 00:24:17 -- common/autotest_common.sh@1177 -- # local i=0 00:12:02.344 00:24:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.344 00:24:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:02.344 00:24:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:04.243 00:24:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:04.243 00:24:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:04.243 00:24:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:12:04.243 00:24:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:04.243 00:24:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.243 00:24:19 -- common/autotest_common.sh@1187 -- # return 0 00:12:04.243 00:24:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:04.243 00:24:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:04.501 00:24:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:04.501 00:24:20 -- common/autotest_common.sh@1177 -- # local i=0 00:12:04.501 00:24:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.501 00:24:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:04.501 00:24:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:06.400 00:24:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:06.400 00:24:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:06.400 00:24:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:12:06.400 00:24:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:06.400 00:24:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.400 00:24:22 -- common/autotest_common.sh@1187 -- # return 0 00:12:06.400 00:24:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:06.401 00:24:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:06.659 00:24:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:06.659 00:24:22 -- common/autotest_common.sh@1177 -- # local i=0 00:12:06.659 00:24:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.659 00:24:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:06.659 00:24:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:08.614 00:24:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:08.615 00:24:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:08.615 00:24:24 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:12:08.615 00:24:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
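On the initiator side, each subsystem is connected over TCP with the kernel NVMe host driver, and waitforserial then polls lsblk until a namespace whose serial matches SPDKn appears (the trace shows it retrying up to ~15 times; the same connect/poll pair repeats below through cnode11). A hedged sketch of an equivalent pass, using the host NQN/UUID shown in the log:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02
  for i in $(seq 1 11); do
      nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420 \
          --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}"
      # wait for the namespace with serial SPDK$i to show up as a block device
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          sleep 2
      done
  done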
00:12:08.615 00:24:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.615 00:24:24 -- common/autotest_common.sh@1187 -- # return 0 00:12:08.615 00:24:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:08.615 00:24:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:08.615 00:24:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:08.615 00:24:24 -- common/autotest_common.sh@1177 -- # local i=0 00:12:08.615 00:24:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.615 00:24:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:08.615 00:24:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:11.145 00:24:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:11.145 00:24:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:11.145 00:24:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:12:11.145 00:24:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:11.145 00:24:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.145 00:24:26 -- common/autotest_common.sh@1187 -- # return 0 00:12:11.145 00:24:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:11.145 00:24:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:11.145 00:24:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:11.145 00:24:26 -- common/autotest_common.sh@1177 -- # local i=0 00:12:11.145 00:24:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.145 00:24:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:11.145 00:24:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:13.046 00:24:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:13.046 00:24:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:13.046 00:24:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:12:13.046 00:24:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:13.046 00:24:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.046 00:24:28 -- common/autotest_common.sh@1187 -- # return 0 00:12:13.046 00:24:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:13.046 00:24:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:13.046 00:24:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:13.046 00:24:28 -- common/autotest_common.sh@1177 -- # local i=0 00:12:13.046 00:24:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.046 00:24:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:13.046 00:24:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:15.576 00:24:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:15.576 00:24:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:15.576 00:24:30 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:12:15.576 00:24:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:15.576 00:24:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.576 00:24:30 -- common/autotest_common.sh@1187 -- # return 0 00:12:15.576 00:24:30 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:15.576 [global] 00:12:15.576 thread=1 00:12:15.576 invalidate=1 00:12:15.576 rw=read 00:12:15.576 time_based=1 00:12:15.576 runtime=10 00:12:15.576 ioengine=libaio 00:12:15.576 direct=1 00:12:15.576 bs=262144 00:12:15.576 iodepth=64 00:12:15.576 norandommap=1 00:12:15.576 numjobs=1 00:12:15.576 00:12:15.576 [job0] 00:12:15.576 filename=/dev/nvme0n1 00:12:15.576 [job1] 00:12:15.576 filename=/dev/nvme10n1 00:12:15.576 [job2] 00:12:15.576 filename=/dev/nvme1n1 00:12:15.576 [job3] 00:12:15.576 filename=/dev/nvme2n1 00:12:15.576 [job4] 00:12:15.576 filename=/dev/nvme3n1 00:12:15.576 [job5] 00:12:15.576 filename=/dev/nvme4n1 00:12:15.576 [job6] 00:12:15.576 filename=/dev/nvme5n1 00:12:15.576 [job7] 00:12:15.576 filename=/dev/nvme6n1 00:12:15.576 [job8] 00:12:15.576 filename=/dev/nvme7n1 00:12:15.576 [job9] 00:12:15.576 filename=/dev/nvme8n1 00:12:15.576 [job10] 00:12:15.576 filename=/dev/nvme9n1 00:12:15.576 Could not set queue depth (nvme0n1) 00:12:15.576 Could not set queue depth (nvme10n1) 00:12:15.576 Could not set queue depth (nvme1n1) 00:12:15.576 Could not set queue depth (nvme2n1) 00:12:15.576 Could not set queue depth (nvme3n1) 00:12:15.576 Could not set queue depth (nvme4n1) 00:12:15.576 Could not set queue depth (nvme5n1) 00:12:15.576 Could not set queue depth (nvme6n1) 00:12:15.576 Could not set queue depth (nvme7n1) 00:12:15.576 Could not set queue depth (nvme8n1) 00:12:15.576 Could not set queue depth (nvme9n1) 00:12:15.576 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:15.576 fio-3.35 00:12:15.576 Starting 11 threads 00:12:27.772 00:12:27.772 job0: (groupid=0, jobs=1): err= 0: pid=66567: Sun Sep 29 00:24:41 2024 00:12:27.772 read: IOPS=967, BW=242MiB/s (254MB/s)(2426MiB/10033msec) 00:12:27.772 slat (usec): min=20, max=42751, avg=1026.22, stdev=2251.24 
00:12:27.772 clat (msec): min=22, max=117, avg=65.01, stdev= 8.22 00:12:27.772 lat (msec): min=24, max=117, avg=66.04, stdev= 8.27 00:12:27.772 clat percentiles (msec): 00:12:27.772 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 60], 00:12:27.772 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 66], 00:12:27.772 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 79], 00:12:27.772 | 99.00th=[ 95], 99.50th=[ 102], 99.90th=[ 115], 99.95th=[ 115], 00:12:27.772 | 99.99th=[ 118] 00:12:27.772 bw ( KiB/s): min=171350, max=262656, per=11.45%, avg=246678.10, stdev=18433.07, samples=20 00:12:27.772 iops : min= 669, max= 1026, avg=963.55, stdev=72.08, samples=20 00:12:27.772 lat (msec) : 50=1.40%, 100=98.04%, 250=0.56% 00:12:27.772 cpu : usr=0.35%, sys=3.60%, ctx=2151, majf=0, minf=4097 00:12:27.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:27.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.772 issued rwts: total=9705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.772 job1: (groupid=0, jobs=1): err= 0: pid=66568: Sun Sep 29 00:24:41 2024 00:12:27.772 read: IOPS=1054, BW=264MiB/s (276MB/s)(2642MiB/10022msec) 00:12:27.772 slat (usec): min=20, max=56055, avg=942.08, stdev=2145.45 00:12:27.772 clat (msec): min=9, max=131, avg=59.67, stdev= 6.59 00:12:27.772 lat (msec): min=10, max=131, avg=60.61, stdev= 6.58 00:12:27.772 clat percentiles (msec): 00:12:27.772 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 56], 00:12:27.772 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:12:27.772 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 68], 00:12:27.772 | 99.00th=[ 73], 99.50th=[ 96], 99.90th=[ 124], 99.95th=[ 124], 00:12:27.772 | 99.99th=[ 124] 00:12:27.772 bw ( KiB/s): min=223744, max=275928, per=12.47%, avg=268763.60, stdev=11118.70, samples=20 00:12:27.772 iops : min= 874, max= 1077, avg=1049.60, stdev=43.38, samples=20 00:12:27.772 lat (msec) : 10=0.01%, 20=0.10%, 50=2.39%, 100=97.06%, 250=0.44% 00:12:27.772 cpu : usr=0.41%, sys=3.56%, ctx=2300, majf=0, minf=4098 00:12:27.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:27.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.772 issued rwts: total=10569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.772 job2: (groupid=0, jobs=1): err= 0: pid=66569: Sun Sep 29 00:24:41 2024 00:12:27.772 read: IOPS=732, BW=183MiB/s (192MB/s)(1845MiB/10075msec) 00:12:27.772 slat (usec): min=18, max=43277, avg=1352.21, stdev=2782.67 00:12:27.772 clat (msec): min=15, max=150, avg=85.91, stdev= 7.16 00:12:27.772 lat (msec): min=16, max=158, avg=87.27, stdev= 7.25 00:12:27.772 clat percentiles (msec): 00:12:27.772 | 1.00th=[ 63], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 83], 00:12:27.772 | 30.00th=[ 85], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:12:27.772 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:12:27.772 | 99.00th=[ 103], 99.50th=[ 110], 99.90th=[ 136], 99.95th=[ 146], 00:12:27.772 | 99.99th=[ 150] 00:12:27.772 bw ( KiB/s): min=178176, max=192512, per=8.69%, avg=187158.20, stdev=3264.87, samples=20 00:12:27.772 iops : min= 696, max= 752, avg=730.95, 
stdev=12.75, samples=20 00:12:27.772 lat (msec) : 20=0.16%, 50=0.27%, 100=98.29%, 250=1.27% 00:12:27.772 cpu : usr=0.42%, sys=2.53%, ctx=1860, majf=0, minf=4097 00:12:27.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:27.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.772 issued rwts: total=7379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.772 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.772 job3: (groupid=0, jobs=1): err= 0: pid=66570: Sun Sep 29 00:24:41 2024 00:12:27.772 read: IOPS=556, BW=139MiB/s (146MB/s)(1403MiB/10088msec) 00:12:27.772 slat (usec): min=20, max=45727, avg=1777.24, stdev=3986.46 00:12:27.772 clat (msec): min=19, max=197, avg=113.05, stdev=11.85 00:12:27.772 lat (msec): min=20, max=197, avg=114.83, stdev=12.20 00:12:27.772 clat percentiles (msec): 00:12:27.772 | 1.00th=[ 72], 5.00th=[ 94], 10.00th=[ 108], 20.00th=[ 111], 00:12:27.772 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 115], 00:12:27.773 | 70.00th=[ 117], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 125], 00:12:27.773 | 99.00th=[ 136], 99.50th=[ 155], 99.90th=[ 192], 99.95th=[ 199], 00:12:27.773 | 99.99th=[ 199] 00:12:27.773 bw ( KiB/s): min=134656, max=174428, per=6.59%, avg=141948.75, stdev=8031.31, samples=20 00:12:27.773 iops : min= 526, max= 681, avg=554.20, stdev=31.40, samples=20 00:12:27.773 lat (msec) : 20=0.02%, 50=0.71%, 100=5.54%, 250=93.73% 00:12:27.773 cpu : usr=0.35%, sys=2.19%, ctx=1397, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=5611,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job4: (groupid=0, jobs=1): err= 0: pid=66571: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=968, BW=242MiB/s (254MB/s)(2428MiB/10028msec) 00:12:27.773 slat (usec): min=16, max=37915, avg=1017.56, stdev=2204.37 00:12:27.773 clat (msec): min=21, max=103, avg=64.95, stdev= 7.40 00:12:27.773 lat (msec): min=22, max=123, avg=65.96, stdev= 7.44 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 60], 00:12:27.773 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 66], 00:12:27.773 | 70.00th=[ 68], 80.00th=[ 70], 90.00th=[ 73], 95.00th=[ 78], 00:12:27.773 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 101], 00:12:27.773 | 99.99th=[ 104] 00:12:27.773 bw ( KiB/s): min=173568, max=265216, per=11.46%, avg=246939.55, stdev=17874.27, samples=20 00:12:27.773 iops : min= 678, max= 1036, avg=964.55, stdev=69.82, samples=20 00:12:27.773 lat (msec) : 50=0.74%, 100=99.14%, 250=0.12% 00:12:27.773 cpu : usr=0.45%, sys=3.00%, ctx=2231, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=9712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job5: (groupid=0, jobs=1): err= 0: pid=66572: Sun Sep 29 00:24:41 2024 00:12:27.773 
read: IOPS=731, BW=183MiB/s (192MB/s)(1843MiB/10074msec) 00:12:27.773 slat (usec): min=20, max=20381, avg=1352.48, stdev=2773.34 00:12:27.773 clat (msec): min=20, max=158, avg=85.96, stdev= 7.43 00:12:27.773 lat (msec): min=21, max=158, avg=87.31, stdev= 7.53 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 56], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 83], 00:12:27.773 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 87], 00:12:27.773 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 92], 95.00th=[ 94], 00:12:27.773 | 99.00th=[ 103], 99.50th=[ 111], 99.90th=[ 148], 99.95th=[ 155], 00:12:27.773 | 99.99th=[ 159] 00:12:27.773 bw ( KiB/s): min=181760, max=194560, per=8.68%, avg=186980.20, stdev=2720.38, samples=20 00:12:27.773 iops : min= 710, max= 760, avg=730.25, stdev=10.69, samples=20 00:12:27.773 lat (msec) : 50=0.61%, 100=97.99%, 250=1.40% 00:12:27.773 cpu : usr=0.31%, sys=2.82%, ctx=1804, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=7372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job6: (groupid=0, jobs=1): err= 0: pid=66573: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=558, BW=140MiB/s (146MB/s)(1409MiB/10089msec) 00:12:27.773 slat (usec): min=18, max=38745, avg=1743.55, stdev=4086.47 00:12:27.773 clat (msec): min=25, max=195, avg=112.58, stdev=12.34 00:12:27.773 lat (msec): min=26, max=204, avg=114.33, stdev=12.85 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 64], 5.00th=[ 85], 10.00th=[ 108], 20.00th=[ 111], 00:12:27.773 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 115], 00:12:27.773 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 124], 00:12:27.773 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 197], 99.95th=[ 197], 00:12:27.773 | 99.99th=[ 197] 00:12:27.773 bw ( KiB/s): min=134656, max=184832, per=6.62%, avg=142624.55, stdev=10462.25, samples=20 00:12:27.773 iops : min= 526, max= 722, avg=556.90, stdev=40.94, samples=20 00:12:27.773 lat (msec) : 50=0.27%, 100=7.47%, 250=92.27% 00:12:27.773 cpu : usr=0.29%, sys=2.08%, ctx=1415, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=5637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job7: (groupid=0, jobs=1): err= 0: pid=66574: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=554, BW=139MiB/s (145MB/s)(1398MiB/10083msec) 00:12:27.773 slat (usec): min=19, max=44112, avg=1780.60, stdev=4077.81 00:12:27.773 clat (msec): min=15, max=200, avg=113.55, stdev=12.63 00:12:27.773 lat (msec): min=18, max=200, avg=115.33, stdev=13.01 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 43], 5.00th=[ 95], 10.00th=[ 108], 20.00th=[ 111], 00:12:27.773 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 116], 00:12:27.773 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 122], 95.00th=[ 126], 00:12:27.773 | 99.00th=[ 136], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 194], 00:12:27.773 | 99.99th=[ 201] 00:12:27.773 bw ( KiB/s): 
min=133386, max=177664, per=6.56%, avg=141450.65, stdev=8954.59, samples=20 00:12:27.773 iops : min= 521, max= 694, avg=552.35, stdev=35.03, samples=20 00:12:27.773 lat (msec) : 20=0.07%, 50=1.04%, 100=5.04%, 250=93.85% 00:12:27.773 cpu : usr=0.32%, sys=1.83%, ctx=1374, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=5590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job8: (groupid=0, jobs=1): err= 0: pid=66575: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=724, BW=181MiB/s (190MB/s)(1824MiB/10071msec) 00:12:27.773 slat (usec): min=19, max=30798, avg=1357.85, stdev=2815.89 00:12:27.773 clat (msec): min=27, max=153, avg=86.82, stdev= 5.80 00:12:27.773 lat (msec): min=28, max=153, avg=88.18, stdev= 5.92 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 72], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 84], 00:12:27.773 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 88], 00:12:27.773 | 70.00th=[ 89], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 95], 00:12:27.773 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 142], 99.95th=[ 146], 00:12:27.773 | 99.99th=[ 153] 00:12:27.773 bw ( KiB/s): min=165707, max=190976, per=8.59%, avg=185124.60, stdev=5205.12, samples=20 00:12:27.773 iops : min= 647, max= 746, avg=723.00, stdev=20.34, samples=20 00:12:27.773 lat (msec) : 50=0.05%, 100=98.41%, 250=1.54% 00:12:27.773 cpu : usr=0.34%, sys=2.52%, ctx=1812, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=7296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 job9: (groupid=0, jobs=1): err= 0: pid=66576: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=549, BW=137MiB/s (144MB/s)(1385MiB/10090msec) 00:12:27.773 slat (usec): min=20, max=56529, avg=1794.32, stdev=4316.28 00:12:27.773 clat (msec): min=25, max=209, avg=114.56, stdev= 9.46 00:12:27.773 lat (msec): min=25, max=209, avg=116.35, stdev= 9.96 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 80], 5.00th=[ 100], 10.00th=[ 109], 20.00th=[ 112], 00:12:27.773 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 116], 00:12:27.773 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 122], 95.00th=[ 125], 00:12:27.773 | 99.00th=[ 140], 99.50th=[ 155], 99.90th=[ 186], 99.95th=[ 186], 00:12:27.773 | 99.99th=[ 209] 00:12:27.773 bw ( KiB/s): min=131072, max=153394, per=6.50%, avg=140131.50, stdev=5112.05, samples=20 00:12:27.773 iops : min= 512, max= 599, avg=547.15, stdev=20.02, samples=20 00:12:27.773 lat (msec) : 50=0.05%, 100=5.20%, 250=94.75% 00:12:27.773 cpu : usr=0.21%, sys=1.95%, ctx=1383, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=5540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:12:27.773 job10: (groupid=0, jobs=1): err= 0: pid=66577: Sun Sep 29 00:24:41 2024 00:12:27.773 read: IOPS=1050, BW=263MiB/s (275MB/s)(2631MiB/10016msec) 00:12:27.773 slat (usec): min=19, max=57871, avg=945.95, stdev=2102.01 00:12:27.773 clat (msec): min=14, max=121, avg=59.89, stdev= 6.75 00:12:27.773 lat (msec): min=18, max=121, avg=60.83, stdev= 6.75 00:12:27.773 clat percentiles (msec): 00:12:27.773 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 56], 00:12:27.773 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:12:27.773 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 68], 00:12:27.773 | 99.00th=[ 90], 99.50th=[ 100], 99.90th=[ 117], 99.95th=[ 118], 00:12:27.773 | 99.99th=[ 122] 00:12:27.773 bw ( KiB/s): min=206848, max=278060, per=12.43%, avg=267828.65, stdev=14784.00, samples=20 00:12:27.773 iops : min= 808, max= 1086, avg=1046.05, stdev=57.70, samples=20 00:12:27.773 lat (msec) : 20=0.11%, 50=1.80%, 100=97.64%, 250=0.45% 00:12:27.773 cpu : usr=0.47%, sys=3.67%, ctx=2314, majf=0, minf=4097 00:12:27.773 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:27.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:27.773 issued rwts: total=10524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.773 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.773 00:12:27.773 Run status group 0 (all jobs): 00:12:27.773 READ: bw=2104MiB/s (2207MB/s), 137MiB/s-264MiB/s (144MB/s-276MB/s), io=20.7GiB (22.3GB), run=10016-10090msec 00:12:27.773 00:12:27.773 Disk stats (read/write): 00:12:27.774 nvme0n1: ios=19313/0, merge=0/0, ticks=1235121/0, in_queue=1235121, util=97.79% 00:12:27.774 nvme10n1: ios=21048/0, merge=0/0, ticks=1238248/0, in_queue=1238248, util=98.00% 00:12:27.774 nvme1n1: ios=14640/0, merge=0/0, ticks=1228826/0, in_queue=1228826, util=98.19% 00:12:27.774 nvme2n1: ios=11115/0, merge=0/0, ticks=1225712/0, in_queue=1225712, util=98.16% 00:12:27.774 nvme3n1: ios=19324/0, merge=0/0, ticks=1234059/0, in_queue=1234059, util=98.18% 00:12:27.774 nvme4n1: ios=14644/0, merge=0/0, ticks=1230389/0, in_queue=1230389, util=98.55% 00:12:27.774 nvme5n1: ios=11165/0, merge=0/0, ticks=1227089/0, in_queue=1227089, util=98.49% 00:12:27.774 nvme6n1: ios=11059/0, merge=0/0, ticks=1224666/0, in_queue=1224666, util=98.63% 00:12:27.774 nvme7n1: ios=14472/0, merge=0/0, ticks=1228474/0, in_queue=1228474, util=98.85% 00:12:27.774 nvme8n1: ios=10957/0, merge=0/0, ticks=1225966/0, in_queue=1225966, util=98.99% 00:12:27.774 nvme9n1: ios=20930/0, merge=0/0, ticks=1237411/0, in_queue=1237411, util=99.02% 00:12:27.774 00:24:41 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:12:27.774 [global] 00:12:27.774 thread=1 00:12:27.774 invalidate=1 00:12:27.774 rw=randwrite 00:12:27.774 time_based=1 00:12:27.774 runtime=10 00:12:27.774 ioengine=libaio 00:12:27.774 direct=1 00:12:27.774 bs=262144 00:12:27.774 iodepth=64 00:12:27.774 norandommap=1 00:12:27.774 numjobs=1 00:12:27.774 00:12:27.774 [job0] 00:12:27.774 filename=/dev/nvme0n1 00:12:27.774 [job1] 00:12:27.774 filename=/dev/nvme10n1 00:12:27.774 [job2] 00:12:27.774 filename=/dev/nvme1n1 00:12:27.774 [job3] 00:12:27.774 filename=/dev/nvme2n1 00:12:27.774 [job4] 00:12:27.774 filename=/dev/nvme3n1 00:12:27.774 [job5] 00:12:27.774 filename=/dev/nvme4n1 00:12:27.774 [job6] 00:12:27.774 
filename=/dev/nvme5n1 00:12:27.774 [job7] 00:12:27.774 filename=/dev/nvme6n1 00:12:27.774 [job8] 00:12:27.774 filename=/dev/nvme7n1 00:12:27.774 [job9] 00:12:27.774 filename=/dev/nvme8n1 00:12:27.774 [job10] 00:12:27.774 filename=/dev/nvme9n1 00:12:27.774 Could not set queue depth (nvme0n1) 00:12:27.774 Could not set queue depth (nvme10n1) 00:12:27.774 Could not set queue depth (nvme1n1) 00:12:27.774 Could not set queue depth (nvme2n1) 00:12:27.774 Could not set queue depth (nvme3n1) 00:12:27.774 Could not set queue depth (nvme4n1) 00:12:27.774 Could not set queue depth (nvme5n1) 00:12:27.774 Could not set queue depth (nvme6n1) 00:12:27.774 Could not set queue depth (nvme7n1) 00:12:27.774 Could not set queue depth (nvme8n1) 00:12:27.774 Could not set queue depth (nvme9n1) 00:12:27.774 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:27.774 fio-3.35 00:12:27.774 Starting 11 threads 00:12:37.751 00:12:37.751 job0: (groupid=0, jobs=1): err= 0: pid=66777: Sun Sep 29 00:24:52 2024 00:12:37.751 write: IOPS=507, BW=127MiB/s (133MB/s)(1282MiB/10104msec); 0 zone resets 00:12:37.751 slat (usec): min=20, max=50977, avg=1946.07, stdev=3367.98 00:12:37.751 clat (msec): min=52, max=222, avg=124.16, stdev= 8.26 00:12:37.751 lat (msec): min=52, max=222, avg=126.11, stdev= 7.68 00:12:37.751 clat percentiles (msec): 00:12:37.751 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 118], 20.00th=[ 120], 00:12:37.751 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:12:37.751 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:12:37.751 | 99.00th=[ 148], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 215], 00:12:37.751 | 99.99th=[ 224] 00:12:37.751 bw ( KiB/s): min=116502, max=133120, per=8.64%, avg=129601.10, stdev=3747.24, samples=20 00:12:37.751 iops : min= 455, max= 520, avg=506.25, stdev=14.65, samples=20 00:12:37.751 lat (msec) : 100=0.57%, 250=99.43% 00:12:37.751 cpu : usr=0.90%, sys=1.48%, ctx=7165, majf=0, minf=1 00:12:37.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:12:37.751 issued rwts: total=0,5126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.751 job1: (groupid=0, jobs=1): err= 0: pid=66778: Sun Sep 29 00:24:52 2024 00:12:37.751 write: IOPS=723, BW=181MiB/s (190MB/s)(1819MiB/10053msec); 0 zone resets 00:12:37.751 slat (usec): min=15, max=22976, avg=1369.65, stdev=2359.68 00:12:37.751 clat (msec): min=10, max=122, avg=87.01, stdev=13.37 00:12:37.751 lat (msec): min=10, max=122, avg=88.38, stdev=13.38 00:12:37.751 clat percentiles (msec): 00:12:37.751 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 86], 00:12:37.751 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 92], 60.00th=[ 92], 00:12:37.751 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 95], 95.00th=[ 102], 00:12:37.751 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 124], 00:12:37.751 | 99.99th=[ 124] 00:12:37.751 bw ( KiB/s): min=153600, max=272384, per=12.30%, avg=184562.45, stdev=30116.61, samples=20 00:12:37.751 iops : min= 600, max= 1064, avg=720.85, stdev=117.56, samples=20 00:12:37.751 lat (msec) : 20=0.16%, 50=0.22%, 100=93.82%, 250=5.80% 00:12:37.751 cpu : usr=1.09%, sys=1.75%, ctx=9610, majf=0, minf=1 00:12:37.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:37.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.751 issued rwts: total=0,7277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.751 job2: (groupid=0, jobs=1): err= 0: pid=66790: Sun Sep 29 00:24:52 2024 00:12:37.751 write: IOPS=411, BW=103MiB/s (108MB/s)(1047MiB/10174msec); 0 zone resets 00:12:37.751 slat (usec): min=18, max=21296, avg=2350.75, stdev=4138.02 00:12:37.751 clat (msec): min=9, max=356, avg=153.05, stdev=27.15 00:12:37.751 lat (msec): min=9, max=356, avg=155.40, stdev=27.29 00:12:37.751 clat percentiles (msec): 00:12:37.751 | 1.00th=[ 41], 5.00th=[ 116], 10.00th=[ 125], 20.00th=[ 148], 00:12:37.751 | 30.00th=[ 150], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:12:37.751 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 188], 00:12:37.751 | 99.00th=[ 239], 99.50th=[ 296], 99.90th=[ 347], 99.95th=[ 347], 00:12:37.751 | 99.99th=[ 355] 00:12:37.751 bw ( KiB/s): min=84311, max=153088, per=7.04%, avg=105591.55, stdev=13773.05, samples=20 00:12:37.751 iops : min= 329, max= 598, avg=412.45, stdev=53.83, samples=20 00:12:37.751 lat (msec) : 10=0.10%, 20=0.10%, 50=1.27%, 100=1.55%, 250=96.18% 00:12:37.751 lat (msec) : 500=0.81% 00:12:37.751 cpu : usr=0.65%, sys=1.20%, ctx=4290, majf=0, minf=1 00:12:37.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:37.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.751 issued rwts: total=0,4188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.751 job3: (groupid=0, jobs=1): err= 0: pid=66791: Sun Sep 29 00:24:52 2024 00:12:37.751 write: IOPS=513, BW=128MiB/s (135MB/s)(1298MiB/10109msec); 0 zone resets 00:12:37.751 slat (usec): min=18, max=11622, avg=1895.85, stdev=3280.16 00:12:37.751 clat (msec): min=6, max=231, avg=122.61, stdev=12.52 00:12:37.751 lat (msec): min=6, max=231, avg=124.51, stdev=12.34 00:12:37.751 clat percentiles (msec): 
00:12:37.751 | 1.00th=[ 61], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 120], 00:12:37.751 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:12:37.751 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:12:37.751 | 99.00th=[ 140], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:12:37.751 | 99.99th=[ 232] 00:12:37.751 bw ( KiB/s): min=126727, max=144384, per=8.75%, avg=131263.90, stdev=3554.55, samples=20 00:12:37.751 iops : min= 495, max= 564, avg=512.65, stdev=13.93, samples=20 00:12:37.751 lat (msec) : 10=0.10%, 20=0.04%, 50=0.54%, 100=1.66%, 250=97.67% 00:12:37.751 cpu : usr=0.79%, sys=1.38%, ctx=5301, majf=0, minf=1 00:12:37.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.751 issued rwts: total=0,5193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.751 job4: (groupid=0, jobs=1): err= 0: pid=66792: Sun Sep 29 00:24:52 2024 00:12:37.751 write: IOPS=678, BW=170MiB/s (178MB/s)(1711MiB/10087msec); 0 zone resets 00:12:37.751 slat (usec): min=14, max=46652, avg=1456.74, stdev=2531.41 00:12:37.751 clat (msec): min=48, max=173, avg=92.85, stdev=10.71 00:12:37.751 lat (msec): min=48, max=173, avg=94.30, stdev=10.57 00:12:37.751 clat percentiles (msec): 00:12:37.751 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:12:37.751 | 30.00th=[ 90], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:12:37.751 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 126], 00:12:37.751 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 169], 00:12:37.751 | 99.99th=[ 174] 00:12:37.751 bw ( KiB/s): min=116736, max=183296, per=11.57%, avg=173593.60, stdev=17212.30, samples=20 00:12:37.751 iops : min= 456, max= 716, avg=678.10, stdev=67.24, samples=20 00:12:37.751 lat (msec) : 50=0.06%, 100=88.97%, 250=10.97% 00:12:37.751 cpu : usr=1.01%, sys=1.63%, ctx=9150, majf=0, minf=1 00:12:37.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:37.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,6844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job5: (groupid=0, jobs=1): err= 0: pid=66797: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=646, BW=162MiB/s (170MB/s)(1643MiB/10160msec); 0 zone resets 00:12:37.752 slat (usec): min=17, max=20864, avg=1504.03, stdev=2657.22 00:12:37.752 clat (msec): min=11, max=346, avg=97.38, stdev=26.58 00:12:37.752 lat (msec): min=11, max=347, avg=98.88, stdev=26.77 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 89], 00:12:37.752 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 92], 60.00th=[ 93], 00:12:37.752 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 165], 00:12:37.752 | 99.00th=[ 203], 99.50th=[ 253], 99.90th=[ 326], 99.95th=[ 334], 00:12:37.752 | 99.99th=[ 347] 00:12:37.752 bw ( KiB/s): min=82432, max=180736, per=11.11%, avg=166656.00, stdev=27955.44, samples=20 00:12:37.752 iops : min= 322, max= 706, avg=651.00, stdev=109.20, samples=20 00:12:37.752 lat (msec) : 20=0.12%, 50=0.30%, 100=88.16%, 250=10.89%, 500=0.52% 00:12:37.752 cpu : 
usr=1.33%, sys=1.77%, ctx=7221, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,6573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job6: (groupid=0, jobs=1): err= 0: pid=66800: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=402, BW=101MiB/s (106MB/s)(1024MiB/10172msec); 0 zone resets 00:12:37.752 slat (usec): min=18, max=20879, avg=2436.18, stdev=4233.92 00:12:37.752 clat (msec): min=10, max=360, avg=156.39, stdev=24.18 00:12:37.752 lat (msec): min=10, max=360, avg=158.82, stdev=24.13 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 70], 5.00th=[ 125], 10.00th=[ 136], 20.00th=[ 148], 00:12:37.752 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:37.752 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 190], 00:12:37.752 | 99.00th=[ 243], 99.50th=[ 300], 99.90th=[ 347], 99.95th=[ 347], 00:12:37.752 | 99.99th=[ 359] 00:12:37.752 bw ( KiB/s): min=82267, max=128000, per=6.88%, avg=103262.15, stdev=9273.11, samples=20 00:12:37.752 iops : min= 321, max= 500, avg=403.35, stdev=36.27, samples=20 00:12:37.752 lat (msec) : 20=0.15%, 50=0.59%, 100=0.68%, 250=97.66%, 500=0.93% 00:12:37.752 cpu : usr=0.69%, sys=1.06%, ctx=4253, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job7: (groupid=0, jobs=1): err= 0: pid=66801: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=677, BW=169MiB/s (177MB/s)(1707MiB/10082msec); 0 zone resets 00:12:37.752 slat (usec): min=16, max=79620, avg=1459.26, stdev=2643.85 00:12:37.752 clat (msec): min=81, max=202, avg=93.04, stdev=11.42 00:12:37.752 lat (msec): min=82, max=202, avg=94.50, stdev=11.29 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:12:37.752 | 30.00th=[ 89], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:12:37.752 | 70.00th=[ 92], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 126], 00:12:37.752 | 99.00th=[ 130], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 194], 00:12:37.752 | 99.99th=[ 203] 00:12:37.752 bw ( KiB/s): min=108327, max=182784, per=11.54%, avg=173121.95, stdev=18716.37, samples=20 00:12:37.752 iops : min= 423, max= 714, avg=676.25, stdev=73.14, samples=20 00:12:37.752 lat (msec) : 100=89.09%, 250=10.91% 00:12:37.752 cpu : usr=1.13%, sys=1.90%, ctx=8131, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,6826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job8: (groupid=0, jobs=1): err= 0: pid=66802: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=515, BW=129MiB/s (135MB/s)(1301MiB/10100msec); 0 zone resets 
00:12:37.752 slat (usec): min=17, max=21020, avg=1870.64, stdev=3270.12 00:12:37.752 clat (msec): min=5, max=215, avg=122.29, stdev=14.65 00:12:37.752 lat (msec): min=5, max=215, avg=124.16, stdev=14.50 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 24], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 120], 00:12:37.752 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 126], 60.00th=[ 126], 00:12:37.752 | 70.00th=[ 127], 80.00th=[ 127], 90.00th=[ 128], 95.00th=[ 129], 00:12:37.752 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 207], 00:12:37.752 | 99.99th=[ 215] 00:12:37.752 bw ( KiB/s): min=122880, max=144606, per=8.77%, avg=131582.00, stdev=4670.68, samples=20 00:12:37.752 iops : min= 480, max= 564, avg=513.90, stdev=18.13, samples=20 00:12:37.752 lat (msec) : 10=0.10%, 20=0.73%, 50=0.63%, 100=0.88%, 250=97.66% 00:12:37.752 cpu : usr=0.91%, sys=1.71%, ctx=6459, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,5204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job9: (groupid=0, jobs=1): err= 0: pid=66803: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=408, BW=102MiB/s (107MB/s)(1038MiB/10168msec); 0 zone resets 00:12:37.752 slat (usec): min=20, max=17862, avg=2378.27, stdev=4163.48 00:12:37.752 clat (msec): min=16, max=351, avg=154.29, stdev=24.20 00:12:37.752 lat (msec): min=16, max=351, avg=156.67, stdev=24.23 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 72], 5.00th=[ 122], 10.00th=[ 127], 20.00th=[ 148], 00:12:37.752 | 30.00th=[ 150], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:37.752 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 188], 00:12:37.752 | 99.00th=[ 234], 99.50th=[ 292], 99.90th=[ 342], 99.95th=[ 342], 00:12:37.752 | 99.99th=[ 351] 00:12:37.752 bw ( KiB/s): min=83976, max=137728, per=6.97%, avg=104612.45, stdev=10999.29, samples=20 00:12:37.752 iops : min= 328, max= 538, avg=408.60, stdev=42.97, samples=20 00:12:37.752 lat (msec) : 20=0.10%, 50=0.48%, 100=1.47%, 250=97.13%, 500=0.82% 00:12:37.752 cpu : usr=0.69%, sys=1.02%, ctx=5207, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,4150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 job10: (groupid=0, jobs=1): err= 0: pid=66804: Sun Sep 29 00:24:52 2024 00:12:37.752 write: IOPS=407, BW=102MiB/s (107MB/s)(1035MiB/10167msec); 0 zone resets 00:12:37.752 slat (usec): min=17, max=21015, avg=2412.47, stdev=4177.82 00:12:37.752 clat (msec): min=10, max=352, avg=154.69, stdev=24.22 00:12:37.752 lat (msec): min=10, max=352, avg=157.11, stdev=24.19 00:12:37.752 clat percentiles (msec): 00:12:37.752 | 1.00th=[ 73], 5.00th=[ 120], 10.00th=[ 126], 20.00th=[ 148], 00:12:37.752 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:12:37.752 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 188], 00:12:37.752 | 99.00th=[ 234], 99.50th=[ 296], 99.90th=[ 342], 99.95th=[ 342], 00:12:37.752 | 99.99th=[ 355] 
00:12:37.752 bw ( KiB/s): min=84136, max=133120, per=6.96%, avg=104354.00, stdev=10690.35, samples=20 00:12:37.752 iops : min= 328, max= 520, avg=407.60, stdev=41.82, samples=20 00:12:37.752 lat (msec) : 20=0.19%, 50=0.48%, 100=0.68%, 250=97.83%, 500=0.82% 00:12:37.752 cpu : usr=0.60%, sys=1.13%, ctx=5308, majf=0, minf=1 00:12:37.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:12:37.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:37.752 issued rwts: total=0,4140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.752 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:37.752 00:12:37.752 Run status group 0 (all jobs): 00:12:37.752 WRITE: bw=1465MiB/s (1536MB/s), 101MiB/s-181MiB/s (106MB/s-190MB/s), io=14.6GiB (15.6GB), run=10053-10174msec 00:12:37.752 00:12:37.752 Disk stats (read/write): 00:12:37.752 nvme0n1: ios=49/10102, merge=0/0, ticks=39/1212760, in_queue=1212799, util=97.73% 00:12:37.752 nvme10n1: ios=49/14408, merge=0/0, ticks=37/1218199, in_queue=1218236, util=97.91% 00:12:37.752 nvme1n1: ios=38/8249, merge=0/0, ticks=40/1211914, in_queue=1211954, util=98.13% 00:12:37.752 nvme2n1: ios=29/10251, merge=0/0, ticks=39/1214853, in_queue=1214892, util=98.14% 00:12:37.752 nvme3n1: ios=28/13537, merge=0/0, ticks=51/1215718, in_queue=1215769, util=98.18% 00:12:37.752 nvme4n1: ios=13/13005, merge=0/0, ticks=34/1208438, in_queue=1208472, util=98.11% 00:12:37.752 nvme5n1: ios=0/8071, merge=0/0, ticks=0/1211268, in_queue=1211268, util=98.43% 00:12:37.752 nvme6n1: ios=0/13494, merge=0/0, ticks=0/1214159, in_queue=1214159, util=98.28% 00:12:37.752 nvme7n1: ios=0/10275, merge=0/0, ticks=0/1215982, in_queue=1215982, util=98.74% 00:12:37.752 nvme8n1: ios=0/8168, merge=0/0, ticks=0/1210438, in_queue=1210438, util=98.76% 00:12:37.752 nvme9n1: ios=0/8146, merge=0/0, ticks=0/1209760, in_queue=1209760, util=98.86% 00:12:37.752 00:24:52 -- target/multiconnection.sh@36 -- # sync 00:12:37.752 00:24:52 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:37.752 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.752 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.752 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:37.752 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.752 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.752 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:12:37.752 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.752 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:12:37.752 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:12:37.753 
00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:37.753 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:37.753 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:37.753 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:37.753 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:37.753 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:37.753 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:12:37.753 00:24:52 
-- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:37.753 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:37.753 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:37.753 00:24:52 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:37.753 00:24:52 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:12:37.753 00:24:52 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:52 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:37.753 00:24:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:52 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:37.753 00:24:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:37.753 00:24:53 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:53 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:12:37.753 00:24:53 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:37.753 00:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:53 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:37.753 00:24:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:37.753 00:24:53 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:37.753 00:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:53 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:37.753 00:24:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:37.753 00:24:53 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:12:37.753 00:24:53 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:37.753 00:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:53 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:37.753 00:24:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:12:37.753 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:37.753 00:24:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:37.753 00:24:53 -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.753 00:24:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:37.753 00:24:53 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:37.753 00:24:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:12:37.753 00:24:53 -- common/autotest_common.sh@1210 -- # return 0 00:12:37.753 00:24:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:37.753 00:24:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.753 00:24:53 -- common/autotest_common.sh@10 -- # set +x 00:12:37.753 00:24:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.753 00:24:53 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:37.753 00:24:53 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:37.753 00:24:53 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:37.753 00:24:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:37.753 00:24:53 -- nvmf/common.sh@116 -- # sync 00:12:37.753 00:24:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:37.754 00:24:53 -- nvmf/common.sh@119 -- # set +e 00:12:37.754 00:24:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:37.754 00:24:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:37.754 rmmod nvme_tcp 00:12:37.754 rmmod nvme_fabrics 00:12:37.754 rmmod nvme_keyring 00:12:37.754 00:24:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:37.754 00:24:53 -- nvmf/common.sh@123 -- # set -e 00:12:37.754 00:24:53 -- nvmf/common.sh@124 -- # return 0 00:12:37.754 00:24:53 -- nvmf/common.sh@477 -- # '[' -n 66107 ']' 00:12:37.754 00:24:53 -- nvmf/common.sh@478 -- # killprocess 66107 00:12:37.754 00:24:53 -- common/autotest_common.sh@926 -- # '[' -z 66107 ']' 00:12:37.754 00:24:53 -- common/autotest_common.sh@930 -- # kill -0 66107 00:12:37.754 00:24:53 -- common/autotest_common.sh@931 -- # uname 00:12:37.754 00:24:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:37.754 00:24:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66107 00:12:37.754 killing process with pid 66107 00:12:37.754 00:24:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:37.754 00:24:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:37.754 00:24:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66107' 00:12:37.754 00:24:53 -- common/autotest_common.sh@945 -- # kill 66107 00:12:37.754 00:24:53 -- common/autotest_common.sh@950 -- # wait 66107 00:12:38.013 00:24:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:38.013 00:24:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:38.013 00:24:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:38.013 00:24:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.013 00:24:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:38.013 00:24:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.013 00:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.013 00:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.013 00:24:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:38.013 ************************************ 00:12:38.013 END TEST nvmf_multiconnection 00:12:38.013 ************************************ 00:12:38.013 00:12:38.013 real 0m48.804s 00:12:38.013 user 2m39.417s 00:12:38.013 sys 0m34.909s 00:12:38.013 00:24:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.013 00:24:53 -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.013 00:24:53 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:38.013 00:24:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:38.013 00:24:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:38.013 00:24:53 -- common/autotest_common.sh@10 -- # set +x 00:12:38.013 ************************************ 00:12:38.013 START TEST nvmf_initiator_timeout 00:12:38.013 ************************************ 00:12:38.013 00:24:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:38.272 * Looking for test storage... 00:12:38.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:38.272 00:24:53 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:38.272 00:24:53 -- nvmf/common.sh@7 -- # uname -s 00:12:38.272 00:24:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.272 00:24:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.272 00:24:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.272 00:24:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.272 00:24:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.272 00:24:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.272 00:24:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.272 00:24:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.272 00:24:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.272 00:24:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:12:38.272 00:24:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:12:38.272 00:24:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.272 00:24:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.272 00:24:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:38.272 00:24:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.272 00:24:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.272 00:24:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.272 00:24:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.272 00:24:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.272 00:24:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.272 00:24:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.272 00:24:53 -- paths/export.sh@5 -- # export PATH 00:12:38.272 00:24:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.272 00:24:53 -- nvmf/common.sh@46 -- # : 0 00:12:38.272 00:24:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:38.272 00:24:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:38.272 00:24:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:38.272 00:24:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.272 00:24:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.272 00:24:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:38.272 00:24:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:38.272 00:24:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:38.272 00:24:53 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.272 00:24:53 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:38.272 00:24:53 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:38.272 00:24:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:38.272 00:24:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.272 00:24:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:38.272 00:24:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:38.272 00:24:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:38.272 00:24:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.272 00:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.272 00:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.272 00:24:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:12:38.272 00:24:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:38.272 00:24:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.272 00:24:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.272 00:24:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:38.272 00:24:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:38.272 00:24:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:38.272 00:24:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:38.272 00:24:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:38.272 00:24:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.272 00:24:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:38.272 00:24:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:38.272 00:24:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:38.272 00:24:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:38.272 00:24:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:38.272 00:24:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:38.272 Cannot find device "nvmf_tgt_br" 00:12:38.272 00:24:53 -- nvmf/common.sh@154 -- # true 00:12:38.272 00:24:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:38.272 Cannot find device "nvmf_tgt_br2" 00:12:38.272 00:24:53 -- nvmf/common.sh@155 -- # true 00:12:38.272 00:24:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:38.272 00:24:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:38.272 Cannot find device "nvmf_tgt_br" 00:12:38.273 00:24:54 -- nvmf/common.sh@157 -- # true 00:12:38.273 00:24:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:38.273 Cannot find device "nvmf_tgt_br2" 00:12:38.273 00:24:54 -- nvmf/common.sh@158 -- # true 00:12:38.273 00:24:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:38.273 00:24:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:38.273 00:24:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:38.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.273 00:24:54 -- nvmf/common.sh@161 -- # true 00:12:38.273 00:24:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:38.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:38.273 00:24:54 -- nvmf/common.sh@162 -- # true 00:12:38.273 00:24:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:38.273 00:24:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:38.273 00:24:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:38.273 00:24:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:38.273 00:24:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:38.532 00:24:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:38.532 00:24:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:38.532 00:24:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:38.532 00:24:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
00:12:38.532 00:24:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:38.532 00:24:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:38.532 00:24:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:38.532 00:24:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:38.532 00:24:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:38.532 00:24:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:38.532 00:24:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:38.532 00:24:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:38.532 00:24:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:38.532 00:24:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:38.532 00:24:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:38.532 00:24:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:38.532 00:24:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:38.532 00:24:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:38.532 00:24:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:38.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:38.532 00:12:38.532 --- 10.0.0.2 ping statistics --- 00:12:38.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.532 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:38.532 00:24:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:38.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:38.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:12:38.532 00:12:38.532 --- 10.0.0.3 ping statistics --- 00:12:38.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.532 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:38.532 00:24:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:38.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:38.532 00:12:38.532 --- 10.0.0.1 ping statistics --- 00:12:38.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.532 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:38.532 00:24:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.532 00:24:54 -- nvmf/common.sh@421 -- # return 0 00:12:38.532 00:24:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:38.532 00:24:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.532 00:24:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:38.532 00:24:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:38.532 00:24:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.532 00:24:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:38.532 00:24:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:38.532 00:24:54 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:38.532 00:24:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:38.532 00:24:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:38.532 00:24:54 -- common/autotest_common.sh@10 -- # set +x 00:12:38.532 00:24:54 -- nvmf/common.sh@469 -- # nvmfpid=67169 00:12:38.532 00:24:54 -- nvmf/common.sh@470 -- # waitforlisten 67169 00:12:38.532 00:24:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.532 00:24:54 -- common/autotest_common.sh@819 -- # '[' -z 67169 ']' 00:12:38.532 00:24:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.532 00:24:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:38.532 00:24:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.532 00:24:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:38.532 00:24:54 -- common/autotest_common.sh@10 -- # set +x 00:12:38.532 [2024-09-29 00:24:54.338590] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:12:38.532 [2024-09-29 00:24:54.338688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.791 [2024-09-29 00:24:54.472917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.791 [2024-09-29 00:24:54.526841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:38.791 [2024-09-29 00:24:54.526983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.791 [2024-09-29 00:24:54.526997] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.791 [2024-09-29 00:24:54.527005] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:38.791 [2024-09-29 00:24:54.527178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.791 [2024-09-29 00:24:54.527843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.791 [2024-09-29 00:24:54.527924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.791 [2024-09-29 00:24:54.527934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.726 00:24:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:39.726 00:24:55 -- common/autotest_common.sh@852 -- # return 0 00:12:39.726 00:24:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:39.726 00:24:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 00:24:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 Malloc0 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 Delay0 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 [2024-09-29 00:24:55.429419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.726 00:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.726 00:24:55 -- common/autotest_common.sh@10 -- # set +x 00:12:39.726 [2024-09-29 00:24:55.457592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.726 00:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.726 00:24:55 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.985 00:24:55 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.985 00:24:55 -- common/autotest_common.sh@1177 -- # local i=0 00:12:39.985 00:24:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.985 00:24:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:39.985 00:24:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:41.888 00:24:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:41.888 00:24:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:41.888 00:24:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.888 00:24:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:41.888 00:24:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.888 00:24:57 -- common/autotest_common.sh@1187 -- # return 0 00:12:41.888 00:24:57 -- target/initiator_timeout.sh@35 -- # fio_pid=67233 00:12:41.888 00:24:57 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:41.888 00:24:57 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:41.888 [global] 00:12:41.888 thread=1 00:12:41.888 invalidate=1 00:12:41.888 rw=write 00:12:41.888 time_based=1 00:12:41.888 runtime=60 00:12:41.888 ioengine=libaio 00:12:41.888 direct=1 00:12:41.888 bs=4096 00:12:41.888 iodepth=1 00:12:41.888 norandommap=0 00:12:41.888 numjobs=1 00:12:41.888 00:12:41.888 verify_dump=1 00:12:41.888 verify_backlog=512 00:12:41.888 verify_state_save=0 00:12:41.888 do_verify=1 00:12:41.888 verify=crc32c-intel 00:12:41.888 [job0] 00:12:41.888 filename=/dev/nvme0n1 00:12:41.888 Could not set queue depth (nvme0n1) 00:12:42.147 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:42.147 fio-3.35 00:12:42.147 Starting 1 thread 00:12:45.428 00:25:00 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:45.428 00:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.428 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:12:45.428 true 00:12:45.428 00:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.428 00:25:00 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:45.428 00:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.428 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:12:45.428 true 00:12:45.428 00:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.428 00:25:00 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:45.428 00:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.428 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:12:45.428 true 00:12:45.428 00:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.428 00:25:00 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:45.428 00:25:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.428 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:12:45.428 true 00:12:45.428 00:25:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.428 00:25:00 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:47.957 00:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.957 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 true 00:12:47.957 00:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:47.957 00:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.957 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 true 00:12:47.957 00:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:47.957 00:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.957 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 true 00:12:47.957 00:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:47.957 00:25:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.957 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 true 00:12:47.957 00:25:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:47.957 00:25:03 -- target/initiator_timeout.sh@54 -- # wait 67233 00:13:44.249 00:13:44.249 job0: (groupid=0, jobs=1): err= 0: pid=67254: Sun Sep 29 00:25:57 2024 00:13:44.249 read: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec) 00:13:44.249 slat (usec): min=10, max=12002, avg=13.60, stdev=64.37 00:13:44.249 clat (usec): min=155, max=40434k, avg=1025.96, stdev=182377.16 00:13:44.249 lat (usec): min=166, max=40434k, avg=1039.56, stdev=182377.17 00:13:44.249 clat percentiles (usec): 00:13:44.249 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:13:44.249 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:13:44.249 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 245], 00:13:44.249 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 322], 99.95th=[ 469], 00:13:44.249 | 99.99th=[ 1516] 00:13:44.249 write: IOPS=825, BW=3304KiB/s (3383kB/s)(194MiB/60000msec); 0 zone resets 00:13:44.249 slat (usec): min=12, max=495, avg=19.44, stdev= 6.20 00:13:44.249 clat (usec): min=15, max=7803, avg=157.04, stdev=43.36 00:13:44.249 lat (usec): min=134, max=7828, avg=176.48, stdev=44.16 00:13:44.249 clat percentiles (usec): 00:13:44.249 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 141], 00:13:44.249 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:13:44.249 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:13:44.249 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 262], 99.95th=[ 375], 00:13:44.249 | 99.99th=[ 1045] 00:13:44.249 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9923.69, stdev=1595.28, samples=39 00:13:44.249 iops : min= 1024, max= 3072, avg=2480.92, stdev=398.82, samples=39 00:13:44.249 lat (usec) : 20=0.01%, 250=98.29%, 500=1.66%, 750=0.02%, 1000=0.01% 00:13:44.249 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:13:44.249 cpu : usr=0.61%, sys=2.11%, ctx=98738, majf=0, minf=5 00:13:44.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.249 issued rwts: total=49152,49555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:44.249 00:13:44.249 Run status group 0 (all jobs): 00:13:44.249 READ: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:13:44.249 WRITE: bw=3304KiB/s (3383kB/s), 3304KiB/s-3304KiB/s (3383kB/s-3383kB/s), io=194MiB (203MB), run=60000-60000msec 00:13:44.249 00:13:44.249 Disk stats (read/write): 00:13:44.249 nvme0n1: ios=49268/49152, merge=0/0, ticks=10376/8285, in_queue=18661, util=99.86% 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.249 00:25:57 -- common/autotest_common.sh@1198 -- # local i=0 00:13:44.249 00:25:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:44.249 00:25:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.249 00:25:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.249 00:25:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:44.249 00:25:57 -- common/autotest_common.sh@1210 -- # return 0 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:44.249 nvmf hotplug test: fio successful as expected 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.249 00:25:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.249 00:25:57 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 00:25:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:44.249 00:25:57 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:44.249 00:25:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.249 00:25:57 -- nvmf/common.sh@116 -- # sync 00:13:44.249 00:25:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:44.249 00:25:58 -- nvmf/common.sh@119 -- # set +e 00:13:44.249 00:25:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:44.249 00:25:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:44.249 rmmod nvme_tcp 00:13:44.249 rmmod nvme_fabrics 00:13:44.249 rmmod nvme_keyring 00:13:44.249 00:25:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:44.249 00:25:58 -- nvmf/common.sh@123 -- # set -e 00:13:44.249 00:25:58 -- nvmf/common.sh@124 -- # return 0 00:13:44.249 00:25:58 -- nvmf/common.sh@477 -- # '[' -n 67169 ']' 00:13:44.249 00:25:58 -- nvmf/common.sh@478 -- # killprocess 67169 00:13:44.249 00:25:58 -- common/autotest_common.sh@926 -- # '[' -z 67169 ']' 00:13:44.249 00:25:58 -- common/autotest_common.sh@930 -- # kill -0 67169 00:13:44.249 00:25:58 -- common/autotest_common.sh@931 -- # uname 00:13:44.249 00:25:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.249 00:25:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67169 00:13:44.249 killing 
process with pid 67169 00:13:44.249 00:25:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:44.249 00:25:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:44.249 00:25:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67169' 00:13:44.249 00:25:58 -- common/autotest_common.sh@945 -- # kill 67169 00:13:44.249 00:25:58 -- common/autotest_common.sh@950 -- # wait 67169 00:13:44.249 00:25:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.249 00:25:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.249 00:25:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.249 00:25:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.249 00:25:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.249 00:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.249 00:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.249 00:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.249 00:25:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:44.249 00:13:44.249 real 1m4.511s 00:13:44.249 user 3m53.478s 00:13:44.249 sys 0m21.663s 00:13:44.249 00:25:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.249 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 ************************************ 00:13:44.249 END TEST nvmf_initiator_timeout 00:13:44.249 ************************************ 00:13:44.249 00:25:58 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:44.249 00:25:58 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:44.249 00:25:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:44.249 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 00:25:58 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:44.249 00:25:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.249 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 00:25:58 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:44.249 00:25:58 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:44.249 00:25:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:44.249 00:25:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.249 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 ************************************ 00:13:44.249 START TEST nvmf_identify 00:13:44.249 ************************************ 00:13:44.249 00:25:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:44.249 * Looking for test storage... 
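For reference, the serial-based wait helpers traced earlier in this run (waitforserial after nvme connect, waitforserial_disconnect before teardown) reduce to a small polling loop over lsblk. A minimal sketch reconstructed from the xtrace output above, not a verbatim copy of autotest_common.sh; the retry limit and 2s settle delay follow the traced values and should be treated as assumptions:

    # Poll until a block device exposing the given subsystem serial number shows up.
    waitforserial() {
        local serial=$1 expected=${2:-1} found=0 i=0
        sleep 2
        while (( i++ <= 15 )); do
            # Count block devices whose SERIAL column matches the subsystem serial.
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found >= expected )) && return 0
            sleep 1
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME   # serial assigned when the subsystem was created

The disconnect variant polls the same lsblk output with grep -q -w until the serial no longer appears.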
00:13:44.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:44.249 00:25:58 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.249 00:25:58 -- nvmf/common.sh@7 -- # uname -s 00:13:44.249 00:25:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.249 00:25:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.249 00:25:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.249 00:25:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.249 00:25:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.249 00:25:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.249 00:25:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.249 00:25:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.249 00:25:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.249 00:25:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.250 00:25:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:13:44.250 00:25:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:13:44.250 00:25:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.250 00:25:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.250 00:25:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.250 00:25:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.250 00:25:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.250 00:25:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.250 00:25:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.250 00:25:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.250 00:25:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.250 00:25:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.250 00:25:58 -- paths/export.sh@5 
-- # export PATH 00:13:44.250 00:25:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.250 00:25:58 -- nvmf/common.sh@46 -- # : 0 00:13:44.250 00:25:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:44.250 00:25:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:44.250 00:25:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:44.250 00:25:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.250 00:25:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.250 00:25:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:44.250 00:25:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:44.250 00:25:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:44.250 00:25:58 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.250 00:25:58 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.250 00:25:58 -- host/identify.sh@14 -- # nvmftestinit 00:13:44.250 00:25:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:44.250 00:25:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.250 00:25:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:44.250 00:25:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:44.250 00:25:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:44.250 00:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.250 00:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.250 00:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.250 00:25:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:44.250 00:25:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:44.250 00:25:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:44.250 00:25:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:44.250 00:25:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:44.250 00:25:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:44.250 00:25:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.250 00:25:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.250 00:25:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.250 00:25:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:44.250 00:25:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.250 00:25:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.250 00:25:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.250 00:25:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.250 00:25:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.250 00:25:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.250 00:25:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.250 00:25:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.250 00:25:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:44.250 00:25:58 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:44.250 Cannot find device "nvmf_tgt_br" 00:13:44.250 00:25:58 -- nvmf/common.sh@154 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.250 Cannot find device "nvmf_tgt_br2" 00:13:44.250 00:25:58 -- nvmf/common.sh@155 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:44.250 00:25:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:44.250 Cannot find device "nvmf_tgt_br" 00:13:44.250 00:25:58 -- nvmf/common.sh@157 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:44.250 Cannot find device "nvmf_tgt_br2" 00:13:44.250 00:25:58 -- nvmf/common.sh@158 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:44.250 00:25:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:44.250 00:25:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.250 00:25:58 -- nvmf/common.sh@161 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.250 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.250 00:25:58 -- nvmf/common.sh@162 -- # true 00:13:44.250 00:25:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.250 00:25:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.250 00:25:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.250 00:25:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.250 00:25:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.250 00:25:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.250 00:25:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.250 00:25:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.250 00:25:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.250 00:25:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:44.250 00:25:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:44.250 00:25:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:44.250 00:25:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:44.250 00:25:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.250 00:25:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.250 00:25:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.250 00:25:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:44.250 00:25:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:44.250 00:25:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.250 00:25:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.250 00:25:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.250 00:25:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.250 00:25:58 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.250 00:25:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:44.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:44.250 00:13:44.250 --- 10.0.0.2 ping statistics --- 00:13:44.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.250 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:44.250 00:25:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:44.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:44.250 00:13:44.250 --- 10.0.0.3 ping statistics --- 00:13:44.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.250 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:44.250 00:25:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:44.250 00:13:44.250 --- 10.0.0.1 ping statistics --- 00:13:44.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.250 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:44.250 00:25:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.250 00:25:58 -- nvmf/common.sh@421 -- # return 0 00:13:44.250 00:25:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.250 00:25:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.251 00:25:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.251 00:25:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.251 00:25:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.251 00:25:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.251 00:25:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.251 00:25:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:44.251 00:25:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.251 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 00:25:58 -- host/identify.sh@19 -- # nvmfpid=68093 00:13:44.251 00:25:58 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.251 00:25:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:44.251 00:25:58 -- host/identify.sh@23 -- # waitforlisten 68093 00:13:44.251 00:25:58 -- common/autotest_common.sh@819 -- # '[' -z 68093 ']' 00:13:44.251 00:25:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.251 00:25:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.251 00:25:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.251 00:25:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.251 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 [2024-09-29 00:25:58.946589] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
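Condensed, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology: the initiator side stays in the host namespace on 10.0.0.1, while the target interfaces live in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3. A trimmed sketch of the same commands (run as root; the second target interface, 10.0.0.3, is set up the same way as the first and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as verified in the log above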
00:13:44.251 [2024-09-29 00:25:58.946841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.251 [2024-09-29 00:25:59.079795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.251 [2024-09-29 00:25:59.131816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.251 [2024-09-29 00:25:59.132184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.251 [2024-09-29 00:25:59.132246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.251 [2024-09-29 00:25:59.132515] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.251 [2024-09-29 00:25:59.132857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.251 [2024-09-29 00:25:59.133120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.251 [2024-09-29 00:25:59.132986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.251 [2024-09-29 00:25:59.133117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.251 00:25:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:44.251 00:25:59 -- common/autotest_common.sh@852 -- # return 0 00:13:44.251 00:25:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:44.251 00:25:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:25:59 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 [2024-09-29 00:25:59.978698] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:44.251 00:26:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 00:26:00 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 Malloc0 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 [2024-09-29 00:26:00.082928] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.251 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.251 00:26:00 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:44.251 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.251 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.510 [2024-09-29 00:26:00.098696] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:44.510 [ 00:13:44.510 { 00:13:44.510 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.510 "subtype": "Discovery", 00:13:44.510 "listen_addresses": [ 00:13:44.510 { 00:13:44.510 "transport": "TCP", 00:13:44.510 "trtype": "TCP", 00:13:44.510 "adrfam": "IPv4", 00:13:44.510 "traddr": "10.0.0.2", 00:13:44.510 "trsvcid": "4420" 00:13:44.510 } 00:13:44.510 ], 00:13:44.510 "allow_any_host": true, 00:13:44.510 "hosts": [] 00:13:44.510 }, 00:13:44.510 { 00:13:44.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.510 "subtype": "NVMe", 00:13:44.510 "listen_addresses": [ 00:13:44.510 { 00:13:44.510 "transport": "TCP", 00:13:44.510 "trtype": "TCP", 00:13:44.510 "adrfam": "IPv4", 00:13:44.510 "traddr": "10.0.0.2", 00:13:44.510 "trsvcid": "4420" 00:13:44.510 } 00:13:44.510 ], 00:13:44.510 "allow_any_host": true, 00:13:44.510 "hosts": [], 00:13:44.510 "serial_number": "SPDK00000000000001", 00:13:44.510 "model_number": "SPDK bdev Controller", 00:13:44.510 "max_namespaces": 32, 00:13:44.510 "min_cntlid": 1, 00:13:44.510 "max_cntlid": 65519, 00:13:44.510 "namespaces": [ 00:13:44.510 { 00:13:44.510 "nsid": 1, 00:13:44.510 "bdev_name": "Malloc0", 00:13:44.510 "name": "Malloc0", 00:13:44.510 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:44.510 "eui64": "ABCDEF0123456789", 00:13:44.510 "uuid": "4ee90241-7d43-4f40-88d6-bf7a45173e4d" 00:13:44.510 } 00:13:44.510 ] 00:13:44.510 } 00:13:44.510 ] 00:13:44.510 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.510 00:26:00 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:44.510 [2024-09-29 00:26:00.138102] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
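Stripped of the rpc_cmd/xtrace plumbing, the target bring-up that produced the subsystem listing above is the usual SPDK RPC sequence. The sketch below substitutes plain scripts/rpc.py for the test's rpc_cmd wrapper (an assumption made for readability); the arguments themselves are taken from the trace:

    # Start the target inside the test's network namespace, as in the trace above.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumption: rpc.py in place of rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems     # returns the JSON document shown above

With both listeners in place, the spdk_nvme_identify invocation just above queries the discovery controller; its identify output appears further down in this log.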
00:13:44.510 [2024-09-29 00:26:00.138302] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68128 ] 00:13:44.510 [2024-09-29 00:26:00.276928] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:44.510 [2024-09-29 00:26:00.277008] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.510 [2024-09-29 00:26:00.277019] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.510 [2024-09-29 00:26:00.277031] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.510 [2024-09-29 00:26:00.277044] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:44.510 [2024-09-29 00:26:00.277192] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:44.510 [2024-09-29 00:26:00.277252] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc11d30 0 00:13:44.510 [2024-09-29 00:26:00.289400] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.510 [2024-09-29 00:26:00.289426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.510 [2024-09-29 00:26:00.289432] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.510 [2024-09-29 00:26:00.289436] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.510 [2024-09-29 00:26:00.289498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.289510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.289515] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.510 [2024-09-29 00:26:00.289530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.510 [2024-09-29 00:26:00.289564] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.510 [2024-09-29 00:26:00.297424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.510 [2024-09-29 00:26:00.297445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.510 [2024-09-29 00:26:00.297451] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.297456] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.510 [2024-09-29 00:26:00.297472] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.510 [2024-09-29 00:26:00.297481] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:44.510 [2024-09-29 00:26:00.297488] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:44.510 [2024-09-29 00:26:00.297517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.297527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.297531] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.510 [2024-09-29 00:26:00.297542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.510 [2024-09-29 00:26:00.297572] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.510 [2024-09-29 00:26:00.297640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.510 [2024-09-29 00:26:00.297648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.510 [2024-09-29 00:26:00.297652] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.510 [2024-09-29 00:26:00.297656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.510 [2024-09-29 00:26:00.297662] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:44.510 [2024-09-29 00:26:00.297671] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:44.510 [2024-09-29 00:26:00.297679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.297696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.297715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.297773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.297780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.297784] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.297795] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:44.511 [2024-09-29 00:26:00.297804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.297812] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.297829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.297847] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.297900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.297907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:44.511 [2024-09-29 00:26:00.297911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.297922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.297933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.297942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.297950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.297968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.298014] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.298021] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.298025] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298029] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.298035] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:44.511 [2024-09-29 00:26:00.298040] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.298049] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.298155] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:44.511 [2024-09-29 00:26:00.298166] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.298177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298182] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.298214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.298278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.298285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.298289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298294] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.298299] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.511 [2024-09-29 00:26:00.298310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298319] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.298360] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.298414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.298421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.298425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.298435] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.511 [2024-09-29 00:26:00.298440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.298449] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:44.511 [2024-09-29 00:26:00.298478] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.298494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298499] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298503] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.298537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.298639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.511 [2024-09-29 00:26:00.298659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.511 [2024-09-29 00:26:00.298664] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298668] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11d30): datao=0, datal=4096, cccid=0 00:13:44.511 [2024-09-29 00:26:00.298673] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6ff30) on tqpair(0xc11d30): expected_datao=0, payload_size=4096 00:13:44.511 [2024-09-29 00:26:00.298684] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298689] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.298705] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.298709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.298724] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:44.511 [2024-09-29 00:26:00.298730] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:44.511 [2024-09-29 00:26:00.298734] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:44.511 [2024-09-29 00:26:00.298740] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:44.511 [2024-09-29 00:26:00.298746] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:44.511 [2024-09-29 00:26:00.298751] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.298765] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.298774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.511 [2024-09-29 00:26:00.298813] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.298876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.298884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.298888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ff30) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.298901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.511 [2024-09-29 00:26:00.298924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.511 [2024-09-29 00:26:00.298946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.511 [2024-09-29 00:26:00.298967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.298975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.298982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.511 [2024-09-29 00:26:00.298987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.299000] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.511 [2024-09-29 00:26:00.299008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.299045] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ff30, cid 0, qid 0 00:13:44.511 [2024-09-29 00:26:00.299053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70090, cid 1, qid 0 00:13:44.511 [2024-09-29 00:26:00.299058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc701f0, cid 2, qid 0 00:13:44.511 [2024-09-29 00:26:00.299063] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.511 [2024-09-29 00:26:00.299068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc704b0, cid 4, qid 0 00:13:44.511 [2024-09-29 00:26:00.299167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.299180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.299184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc704b0) on tqpair=0xc11d30 00:13:44.511 
[2024-09-29 00:26:00.299196] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:44.511 [2024-09-29 00:26:00.299202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:44.511 [2024-09-29 00:26:00.299215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299224] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.299251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc704b0, cid 4, qid 0 00:13:44.511 [2024-09-29 00:26:00.299311] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.511 [2024-09-29 00:26:00.299319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.511 [2024-09-29 00:26:00.299323] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299327] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11d30): datao=0, datal=4096, cccid=4 00:13:44.511 [2024-09-29 00:26:00.299345] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc704b0) on tqpair(0xc11d30): expected_datao=0, payload_size=4096 00:13:44.511 [2024-09-29 00:26:00.299355] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299359] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.299376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.299380] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc704b0) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.299400] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:44.511 [2024-09-29 00:26:00.299442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299457] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.299474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:44.511 [2024-09-29 00:26:00.299519] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc704b0, cid 4, qid 0 00:13:44.511 [2024-09-29 00:26:00.299533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70610, cid 5, qid 0 00:13:44.511 [2024-09-29 00:26:00.299645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.511 [2024-09-29 00:26:00.299656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.511 [2024-09-29 00:26:00.299661] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299665] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11d30): datao=0, datal=1024, cccid=4 00:13:44.511 [2024-09-29 00:26:00.299670] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc704b0) on tqpair(0xc11d30): expected_datao=0, payload_size=1024 00:13:44.511 [2024-09-29 00:26:00.299678] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299683] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.299696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.299700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70610) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.299723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.299730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.299734] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299739] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc704b0) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.299756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.299798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc704b0, cid 4, qid 0 00:13:44.511 [2024-09-29 00:26:00.299869] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.511 [2024-09-29 00:26:00.299876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.511 [2024-09-29 00:26:00.299880] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299885] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11d30): datao=0, datal=3072, cccid=4 00:13:44.511 [2024-09-29 00:26:00.299890] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc704b0) on tqpair(0xc11d30): expected_datao=0, payload_size=3072 00:13:44.511 [2024-09-29 00:26:00.299898] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 
00:26:00.299902] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.511 [2024-09-29 00:26:00.299918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.511 [2024-09-29 00:26:00.299922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299926] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc704b0) on tqpair=0xc11d30 00:13:44.511 [2024-09-29 00:26:00.299936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.299946] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc11d30) 00:13:44.511 [2024-09-29 00:26:00.299953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.511 [2024-09-29 00:26:00.299976] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc704b0, cid 4, qid 0 00:13:44.511 [2024-09-29 00:26:00.300050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.511 [2024-09-29 00:26:00.300062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.511 [2024-09-29 00:26:00.300066] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.511 [2024-09-29 00:26:00.300071] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc11d30): datao=0, datal=8, cccid=4 00:13:44.512 [2024-09-29 00:26:00.300076] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc704b0) on tqpair(0xc11d30): expected_datao=0, payload_size=8 00:13:44.512 [2024-09-29 00:26:00.300084] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300088] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.300111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300119] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc704b0) on tqpair=0xc11d30 00:13:44.512 ===================================================== 00:13:44.512 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:44.512 ===================================================== 00:13:44.512 Controller Capabilities/Features 00:13:44.512 ================================ 00:13:44.512 Vendor ID: 0000 00:13:44.512 Subsystem Vendor ID: 0000 00:13:44.512 Serial Number: .................... 00:13:44.512 Model Number: ........................................ 
00:13:44.512 Firmware Version: 24.01.1 00:13:44.512 Recommended Arb Burst: 0 00:13:44.512 IEEE OUI Identifier: 00 00 00 00:13:44.512 Multi-path I/O 00:13:44.512 May have multiple subsystem ports: No 00:13:44.512 May have multiple controllers: No 00:13:44.512 Associated with SR-IOV VF: No 00:13:44.512 Max Data Transfer Size: 131072 00:13:44.512 Max Number of Namespaces: 0 00:13:44.512 Max Number of I/O Queues: 1024 00:13:44.512 NVMe Specification Version (VS): 1.3 00:13:44.512 NVMe Specification Version (Identify): 1.3 00:13:44.512 Maximum Queue Entries: 128 00:13:44.512 Contiguous Queues Required: Yes 00:13:44.512 Arbitration Mechanisms Supported 00:13:44.512 Weighted Round Robin: Not Supported 00:13:44.512 Vendor Specific: Not Supported 00:13:44.512 Reset Timeout: 15000 ms 00:13:44.512 Doorbell Stride: 4 bytes 00:13:44.512 NVM Subsystem Reset: Not Supported 00:13:44.512 Command Sets Supported 00:13:44.512 NVM Command Set: Supported 00:13:44.512 Boot Partition: Not Supported 00:13:44.512 Memory Page Size Minimum: 4096 bytes 00:13:44.512 Memory Page Size Maximum: 4096 bytes 00:13:44.512 Persistent Memory Region: Not Supported 00:13:44.512 Optional Asynchronous Events Supported 00:13:44.512 Namespace Attribute Notices: Not Supported 00:13:44.512 Firmware Activation Notices: Not Supported 00:13:44.512 ANA Change Notices: Not Supported 00:13:44.512 PLE Aggregate Log Change Notices: Not Supported 00:13:44.512 LBA Status Info Alert Notices: Not Supported 00:13:44.512 EGE Aggregate Log Change Notices: Not Supported 00:13:44.512 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.512 Zone Descriptor Change Notices: Not Supported 00:13:44.512 Discovery Log Change Notices: Supported 00:13:44.512 Controller Attributes 00:13:44.512 128-bit Host Identifier: Not Supported 00:13:44.512 Non-Operational Permissive Mode: Not Supported 00:13:44.512 NVM Sets: Not Supported 00:13:44.512 Read Recovery Levels: Not Supported 00:13:44.512 Endurance Groups: Not Supported 00:13:44.512 Predictable Latency Mode: Not Supported 00:13:44.512 Traffic Based Keep ALive: Not Supported 00:13:44.512 Namespace Granularity: Not Supported 00:13:44.512 SQ Associations: Not Supported 00:13:44.512 UUID List: Not Supported 00:13:44.512 Multi-Domain Subsystem: Not Supported 00:13:44.512 Fixed Capacity Management: Not Supported 00:13:44.512 Variable Capacity Management: Not Supported 00:13:44.512 Delete Endurance Group: Not Supported 00:13:44.512 Delete NVM Set: Not Supported 00:13:44.512 Extended LBA Formats Supported: Not Supported 00:13:44.512 Flexible Data Placement Supported: Not Supported 00:13:44.512 00:13:44.512 Controller Memory Buffer Support 00:13:44.512 ================================ 00:13:44.512 Supported: No 00:13:44.512 00:13:44.512 Persistent Memory Region Support 00:13:44.512 ================================ 00:13:44.512 Supported: No 00:13:44.512 00:13:44.512 Admin Command Set Attributes 00:13:44.512 ============================ 00:13:44.512 Security Send/Receive: Not Supported 00:13:44.512 Format NVM: Not Supported 00:13:44.512 Firmware Activate/Download: Not Supported 00:13:44.512 Namespace Management: Not Supported 00:13:44.512 Device Self-Test: Not Supported 00:13:44.512 Directives: Not Supported 00:13:44.512 NVMe-MI: Not Supported 00:13:44.512 Virtualization Management: Not Supported 00:13:44.512 Doorbell Buffer Config: Not Supported 00:13:44.512 Get LBA Status Capability: Not Supported 00:13:44.512 Command & Feature Lockdown Capability: Not Supported 00:13:44.512 Abort Command Limit: 1 00:13:44.512 
Async Event Request Limit: 4 00:13:44.512 Number of Firmware Slots: N/A 00:13:44.512 Firmware Slot 1 Read-Only: N/A 00:13:44.512 Firmware Activation Without Reset: N/A 00:13:44.512 Multiple Update Detection Support: N/A 00:13:44.512 Firmware Update Granularity: No Information Provided 00:13:44.512 Per-Namespace SMART Log: No 00:13:44.512 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.512 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:44.512 Command Effects Log Page: Not Supported 00:13:44.512 Get Log Page Extended Data: Supported 00:13:44.512 Telemetry Log Pages: Not Supported 00:13:44.512 Persistent Event Log Pages: Not Supported 00:13:44.512 Supported Log Pages Log Page: May Support 00:13:44.512 Commands Supported & Effects Log Page: Not Supported 00:13:44.512 Feature Identifiers & Effects Log Page:May Support 00:13:44.512 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.512 Data Area 4 for Telemetry Log: Not Supported 00:13:44.512 Error Log Page Entries Supported: 128 00:13:44.512 Keep Alive: Not Supported 00:13:44.512 00:13:44.512 NVM Command Set Attributes 00:13:44.512 ========================== 00:13:44.512 Submission Queue Entry Size 00:13:44.512 Max: 1 00:13:44.512 Min: 1 00:13:44.512 Completion Queue Entry Size 00:13:44.512 Max: 1 00:13:44.512 Min: 1 00:13:44.512 Number of Namespaces: 0 00:13:44.512 Compare Command: Not Supported 00:13:44.512 Write Uncorrectable Command: Not Supported 00:13:44.512 Dataset Management Command: Not Supported 00:13:44.512 Write Zeroes Command: Not Supported 00:13:44.512 Set Features Save Field: Not Supported 00:13:44.512 Reservations: Not Supported 00:13:44.512 Timestamp: Not Supported 00:13:44.512 Copy: Not Supported 00:13:44.512 Volatile Write Cache: Not Present 00:13:44.512 Atomic Write Unit (Normal): 1 00:13:44.512 Atomic Write Unit (PFail): 1 00:13:44.512 Atomic Compare & Write Unit: 1 00:13:44.512 Fused Compare & Write: Supported 00:13:44.512 Scatter-Gather List 00:13:44.512 SGL Command Set: Supported 00:13:44.512 SGL Keyed: Supported 00:13:44.512 SGL Bit Bucket Descriptor: Not Supported 00:13:44.512 SGL Metadata Pointer: Not Supported 00:13:44.512 Oversized SGL: Not Supported 00:13:44.512 SGL Metadata Address: Not Supported 00:13:44.512 SGL Offset: Supported 00:13:44.512 Transport SGL Data Block: Not Supported 00:13:44.512 Replay Protected Memory Block: Not Supported 00:13:44.512 00:13:44.512 Firmware Slot Information 00:13:44.512 ========================= 00:13:44.512 Active slot: 0 00:13:44.512 00:13:44.512 00:13:44.512 Error Log 00:13:44.512 ========= 00:13:44.512 00:13:44.512 Active Namespaces 00:13:44.512 ================= 00:13:44.512 Discovery Log Page 00:13:44.512 ================== 00:13:44.512 Generation Counter: 2 00:13:44.512 Number of Records: 2 00:13:44.512 Record Format: 0 00:13:44.512 00:13:44.512 Discovery Log Entry 0 00:13:44.512 ---------------------- 00:13:44.512 Transport Type: 3 (TCP) 00:13:44.512 Address Family: 1 (IPv4) 00:13:44.512 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:44.512 Entry Flags: 00:13:44.512 Duplicate Returned Information: 1 00:13:44.512 Explicit Persistent Connection Support for Discovery: 1 00:13:44.512 Transport Requirements: 00:13:44.512 Secure Channel: Not Required 00:13:44.512 Port ID: 0 (0x0000) 00:13:44.512 Controller ID: 65535 (0xffff) 00:13:44.512 Admin Max SQ Size: 128 00:13:44.512 Transport Service Identifier: 4420 00:13:44.512 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:44.512 Transport Address: 10.0.0.2 00:13:44.512 
Discovery Log Entry 1 00:13:44.512 ---------------------- 00:13:44.512 Transport Type: 3 (TCP) 00:13:44.512 Address Family: 1 (IPv4) 00:13:44.512 Subsystem Type: 2 (NVM Subsystem) 00:13:44.512 Entry Flags: 00:13:44.512 Duplicate Returned Information: 0 00:13:44.512 Explicit Persistent Connection Support for Discovery: 0 00:13:44.512 Transport Requirements: 00:13:44.512 Secure Channel: Not Required 00:13:44.512 Port ID: 0 (0x0000) 00:13:44.512 Controller ID: 65535 (0xffff) 00:13:44.512 Admin Max SQ Size: 128 00:13:44.512 Transport Service Identifier: 4420 00:13:44.512 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:44.512 Transport Address: 10.0.0.2 [2024-09-29 00:26:00.300230] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:44.512 [2024-09-29 00:26:00.300259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.512 [2024-09-29 00:26:00.300268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.512 [2024-09-29 00:26:00.300275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.512 [2024-09-29 00:26:00.300282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.512 [2024-09-29 00:26:00.300292] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300297] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300301] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.300408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.300416] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.300433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.300545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.300553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300557] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.300567] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:44.512 [2024-09-29 00:26:00.300572] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:44.512 [2024-09-29 00:26:00.300583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.300667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.300679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.300699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300704] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300708] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.300788] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.300795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.300814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.300897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 
00:26:00.300904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.300908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.300923] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300928] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.300932] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.300940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.300957] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.301007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.301014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.301018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.301033] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.301050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.301067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.301123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.301134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.301138] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301142] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.301153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301158] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.301170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.301188] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.301238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.301245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.301249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 
[2024-09-29 00:26:00.301253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.301264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.301273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.301281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.301297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.305348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.305366] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.305371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.305375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.512 [2024-09-29 00:26:00.305390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.305396] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.305400] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc11d30) 00:13:44.512 [2024-09-29 00:26:00.305409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.512 [2024-09-29 00:26:00.305434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc70350, cid 3, qid 0 00:13:44.512 [2024-09-29 00:26:00.305492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.512 [2024-09-29 00:26:00.305499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.512 [2024-09-29 00:26:00.305503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.512 [2024-09-29 00:26:00.305508] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc70350) on tqpair=0xc11d30 00:13:44.513 [2024-09-29 00:26:00.305516] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:13:44.513 00:13:44.513 00:26:00 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:44.513 [2024-09-29 00:26:00.342391] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
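The identify.sh step above re-runs spdk_nvme_identify, this time against the NVM subsystem (nqn.2016-06.io.spdk:cnode1) that Discovery Log Entry 1 advertised at 10.0.0.2:4420, after the discovery controller was shut down. A minimal sketch of the two invocations this stage boils down to is below, assuming an SPDK nvmf target is already serving both subsystems on that address; the discovery-side command line is an assumption (its exact flags are not visible in this excerpt), while the cnode1 invocation matches the one logged above.

# Sketch only: reproduce the two identify calls by hand against a live target.
IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify   # path as used by the test; var name is ours

# Discovery subsystem: prints the Discovery Log Page seen earlier (Generation Counter 2, 2 records).
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'

# NVM subsystem from Discovery Log Entry 1, with all debug log flags enabled as the test does.
$IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all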
00:13:44.513 [2024-09-29 00:26:00.342674] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68134 ] 00:13:44.774 [2024-09-29 00:26:00.479810] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:44.774 [2024-09-29 00:26:00.479882] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:44.774 [2024-09-29 00:26:00.479889] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:44.774 [2024-09-29 00:26:00.479902] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:44.774 [2024-09-29 00:26:00.479915] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:44.774 [2024-09-29 00:26:00.480033] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:44.774 [2024-09-29 00:26:00.480097] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5ad30 0 00:13:44.774 [2024-09-29 00:26:00.485434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:44.774 [2024-09-29 00:26:00.485460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:44.774 [2024-09-29 00:26:00.485481] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:44.774 [2024-09-29 00:26:00.485500] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:44.775 [2024-09-29 00:26:00.485543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.485550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.485554] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.485568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:44.775 [2024-09-29 00:26:00.485597] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.493395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.493417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.493437] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493443] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.493454] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:44.775 [2024-09-29 00:26:00.493463] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:44.775 [2024-09-29 00:26:00.493470] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:44.775 [2024-09-29 00:26:00.493485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493491] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493495] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.493504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.493531] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.493592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.493599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.493603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.493614] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:44.775 [2024-09-29 00:26:00.493622] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:44.775 [2024-09-29 00:26:00.493629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.493637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.493645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.493662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.494169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.494183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.494188] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494192] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.494200] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:44.775 [2024-09-29 00:26:00.494224] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.494231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.494247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.494265] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.494319] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.494325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 
00:26:00.494329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494333] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.494340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.494378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494388] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.494395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.494415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.494626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.494633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.494637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494641] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.494647] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:44.775 [2024-09-29 00:26:00.494652] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.494661] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.494766] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:44.775 [2024-09-29 00:26:00.494771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.494780] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494784] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.494788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.494795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.494812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.495279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.495294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.495299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 
[2024-09-29 00:26:00.495310] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.775 [2024-09-29 00:26:00.495321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.495369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.495390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.495468] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.495476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.495480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.495490] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.775 [2024-09-29 00:26:00.495495] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:44.775 [2024-09-29 00:26:00.495504] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:44.775 [2024-09-29 00:26:00.495520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.775 [2024-09-29 00:26:00.495531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.495540] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.775 [2024-09-29 00:26:00.495548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.775 [2024-09-29 00:26:00.495568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.775 [2024-09-29 00:26:00.496197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.775 [2024-09-29 00:26:00.496228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.775 [2024-09-29 00:26:00.496233] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.496246] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=4096, cccid=0 00:13:44.775 [2024-09-29 00:26:00.496252] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab8f30) on tqpair(0x1a5ad30): expected_datao=0, payload_size=4096 00:13:44.775 [2024-09-29 00:26:00.496263] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.496268] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.496278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.775 [2024-09-29 00:26:00.496285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.775 [2024-09-29 00:26:00.496289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.775 [2024-09-29 00:26:00.496294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.775 [2024-09-29 00:26:00.496305] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:44.775 [2024-09-29 00:26:00.496311] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:44.776 [2024-09-29 00:26:00.496316] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:44.776 [2024-09-29 00:26:00.496321] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:44.776 [2024-09-29 00:26:00.496326] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:44.776 [2024-09-29 00:26:00.496343] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.496360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.496369] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496374] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.496387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.776 [2024-09-29 00:26:00.496410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.776 [2024-09-29 00:26:00.496891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.776 [2024-09-29 00:26:00.496906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.776 [2024-09-29 00:26:00.496911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab8f30) on tqpair=0x1a5ad30 00:13:44.776 [2024-09-29 00:26:00.496924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496929] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496933] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.496940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.776 [2024-09-29 00:26:00.496946] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496951] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496955] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.496961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.776 [2024-09-29 00:26:00.496967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496975] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.496981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.776 [2024-09-29 00:26:00.496987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.496995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.497001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.776 [2024-09-29 00:26:00.497006] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.497019] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.497027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.497031] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.497035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.497042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.776 [2024-09-29 00:26:00.497062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab8f30, cid 0, qid 0 00:13:44.776 [2024-09-29 00:26:00.497070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9090, cid 1, qid 0 00:13:44.776 [2024-09-29 00:26:00.497074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab91f0, cid 2, qid 0 00:13:44.776 [2024-09-29 00:26:00.497079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.776 [2024-09-29 00:26:00.497084] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.776 [2024-09-29 00:26:00.501396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.776 [2024-09-29 00:26:00.501415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.776 [2024-09-29 00:26:00.501420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.776 [2024-09-29 00:26:00.501432] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:44.776 [2024-09-29 00:26:00.501438] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.501449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.501460] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.501469] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.501486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:44.776 [2024-09-29 00:26:00.501510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.776 [2024-09-29 00:26:00.501658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.776 [2024-09-29 00:26:00.501665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.776 [2024-09-29 00:26:00.501669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.776 [2024-09-29 00:26:00.501736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.501748] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.501756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.501765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.501772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.776 [2024-09-29 00:26:00.501791] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.776 [2024-09-29 00:26:00.502164] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.776 [2024-09-29 00:26:00.502180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.776 [2024-09-29 00:26:00.502185] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502189] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=4096, cccid=4 00:13:44.776 [2024-09-29 00:26:00.502194] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab94b0) on tqpair(0x1a5ad30): expected_datao=0, payload_size=4096 00:13:44.776 [2024-09-29 00:26:00.502203] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502207] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:13:44.776 [2024-09-29 00:26:00.502216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.776 [2024-09-29 00:26:00.502222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.776 [2024-09-29 00:26:00.502226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.776 [2024-09-29 00:26:00.502247] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:44.776 [2024-09-29 00:26:00.502259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.502270] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.502278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502286] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.776 [2024-09-29 00:26:00.502294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.776 [2024-09-29 00:26:00.502344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.776 [2024-09-29 00:26:00.502614] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.776 [2024-09-29 00:26:00.502626] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.776 [2024-09-29 00:26:00.502630] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502634] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=4096, cccid=4 00:13:44.776 [2024-09-29 00:26:00.502639] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab94b0) on tqpair(0x1a5ad30): expected_datao=0, payload_size=4096 00:13:44.776 [2024-09-29 00:26:00.502647] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502651] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.776 [2024-09-29 00:26:00.502717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.776 [2024-09-29 00:26:00.502721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.776 [2024-09-29 00:26:00.502742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.502754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.776 [2024-09-29 00:26:00.502763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502767] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.776 [2024-09-29 00:26:00.502771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.502779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.502800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.777 [2024-09-29 00:26:00.502946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.777 [2024-09-29 00:26:00.502953] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.777 [2024-09-29 00:26:00.502957] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.502961] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=4096, cccid=4 00:13:44.777 [2024-09-29 00:26:00.502966] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab94b0) on tqpair(0x1a5ad30): expected_datao=0, payload_size=4096 00:13:44.777 [2024-09-29 00:26:00.502974] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.502978] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.503053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.503057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.503072] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503094] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503102] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503113] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.777 [2024-09-29 00:26:00.503117] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:44.777 [2024-09-29 00:26:00.503123] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:44.777 [2024-09-29 00:26:00.503141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503149] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.503157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.503164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503172] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.503179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.777 [2024-09-29 00:26:00.503201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.777 [2024-09-29 00:26:00.503209] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9610, cid 5, qid 0 00:13:44.777 [2024-09-29 00:26:00.503648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.503663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.503668] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.503681] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.503687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.503691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503695] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9610) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.503706] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.503723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.503743] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9610, cid 5, qid 0 00:13:44.777 [2024-09-29 00:26:00.503799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.503806] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.503810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9610) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.503825] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503830] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.503834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.503841] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.503867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9610, cid 5, qid 0 00:13:44.777 [2024-09-29 00:26:00.504150] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.504164] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.504168] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9610) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.504185] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.504202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.504219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9610, cid 5, qid 0 00:13:44.777 [2024-09-29 00:26:00.504395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.777 [2024-09-29 00:26:00.504414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.777 [2024-09-29 00:26:00.504419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9610) on tqpair=0x1a5ad30 00:13:44.777 [2024-09-29 00:26:00.504441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.504459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.504467] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.504483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.504491] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504499] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.504506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.504514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.504523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a5ad30) 00:13:44.777 [2024-09-29 00:26:00.504530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.777 [2024-09-29 00:26:00.504553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9610, cid 5, qid 0 00:13:44.777 [2024-09-29 00:26:00.504560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab94b0, cid 4, qid 0 00:13:44.777 [2024-09-29 00:26:00.504566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9770, cid 6, qid 0 00:13:44.777 [2024-09-29 00:26:00.504571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab98d0, cid 7, qid 0 00:13:44.777 [2024-09-29 00:26:00.505064] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.777 [2024-09-29 00:26:00.505081] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.777 [2024-09-29 00:26:00.505086] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.505090] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=8192, cccid=5 00:13:44.777 [2024-09-29 00:26:00.505096] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9610) on tqpair(0x1a5ad30): expected_datao=0, payload_size=8192 00:13:44.777 [2024-09-29 00:26:00.505114] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.505133] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.505139] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.777 [2024-09-29 00:26:00.505145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.777 [2024-09-29 00:26:00.505149] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.777 [2024-09-29 00:26:00.505152] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=512, cccid=4 00:13:44.778 [2024-09-29 00:26:00.505157] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab94b0) on tqpair(0x1a5ad30): expected_datao=0, payload_size=512 00:13:44.778 [2024-09-29 00:26:00.505164] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505168] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.778 [2024-09-29 00:26:00.505195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.778 [2024-09-29 00:26:00.505198] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505202] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=512, cccid=6 00:13:44.778 [2024-09-29 00:26:00.505207] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab9770) on tqpair(0x1a5ad30): expected_datao=0, payload_size=512 00:13:44.778 [2024-09-29 00:26:00.505214] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505218] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:44.778 [2024-09-29 00:26:00.505229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:44.778 [2024-09-29 00:26:00.505233] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505237] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5ad30): datao=0, datal=4096, cccid=7 00:13:44.778 [2024-09-29 00:26:00.505241] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab98d0) on tqpair(0x1a5ad30): expected_datao=0, payload_size=4096 00:13:44.778 [2024-09-29 00:26:00.505249] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505253] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505258] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.778 [2024-09-29 00:26:00.505264] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.778 [2024-09-29 00:26:00.505268] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9610) on tqpair=0x1a5ad30 00:13:44.778 [2024-09-29 00:26:00.505291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.778 [2024-09-29 00:26:00.505299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.778 [2024-09-29 00:26:00.505302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab94b0) on tqpair=0x1a5ad30 00:13:44.778 [2024-09-29 00:26:00.505318] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.778 [2024-09-29 00:26:00.505324] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.778 [2024-09-29 00:26:00.505328] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505332] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9770) on tqpair=0x1a5ad30 00:13:44.778 [2024-09-29 00:26:00.505340] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.778 [2024-09-29 00:26:00.505346] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.778 [2024-09-29 00:26:00.505350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.778 [2024-09-29 00:26:00.505354] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab98d0) on tqpair=0x1a5ad30 00:13:44.778 ===================================================== 00:13:44.778 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:44.778 ===================================================== 00:13:44.778 Controller Capabilities/Features 00:13:44.778 ================================ 00:13:44.778 Vendor ID: 8086 00:13:44.778 Subsystem Vendor ID: 8086 00:13:44.778 Serial Number: SPDK00000000000001 00:13:44.778 Model Number: SPDK bdev Controller 00:13:44.778 Firmware Version: 24.01.1 00:13:44.778 Recommended Arb Burst: 6 00:13:44.778 IEEE OUI Identifier: e4 d2 5c 00:13:44.778 Multi-path I/O 00:13:44.778 May have multiple subsystem 
ports: Yes 00:13:44.778 May have multiple controllers: Yes 00:13:44.778 Associated with SR-IOV VF: No 00:13:44.778 Max Data Transfer Size: 131072 00:13:44.778 Max Number of Namespaces: 32 00:13:44.778 Max Number of I/O Queues: 127 00:13:44.778 NVMe Specification Version (VS): 1.3 00:13:44.778 NVMe Specification Version (Identify): 1.3 00:13:44.778 Maximum Queue Entries: 128 00:13:44.778 Contiguous Queues Required: Yes 00:13:44.778 Arbitration Mechanisms Supported 00:13:44.778 Weighted Round Robin: Not Supported 00:13:44.778 Vendor Specific: Not Supported 00:13:44.778 Reset Timeout: 15000 ms 00:13:44.778 Doorbell Stride: 4 bytes 00:13:44.778 NVM Subsystem Reset: Not Supported 00:13:44.778 Command Sets Supported 00:13:44.778 NVM Command Set: Supported 00:13:44.778 Boot Partition: Not Supported 00:13:44.778 Memory Page Size Minimum: 4096 bytes 00:13:44.778 Memory Page Size Maximum: 4096 bytes 00:13:44.778 Persistent Memory Region: Not Supported 00:13:44.778 Optional Asynchronous Events Supported 00:13:44.778 Namespace Attribute Notices: Supported 00:13:44.778 Firmware Activation Notices: Not Supported 00:13:44.778 ANA Change Notices: Not Supported 00:13:44.778 PLE Aggregate Log Change Notices: Not Supported 00:13:44.778 LBA Status Info Alert Notices: Not Supported 00:13:44.778 EGE Aggregate Log Change Notices: Not Supported 00:13:44.778 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.778 Zone Descriptor Change Notices: Not Supported 00:13:44.778 Discovery Log Change Notices: Not Supported 00:13:44.778 Controller Attributes 00:13:44.778 128-bit Host Identifier: Supported 00:13:44.778 Non-Operational Permissive Mode: Not Supported 00:13:44.778 NVM Sets: Not Supported 00:13:44.778 Read Recovery Levels: Not Supported 00:13:44.778 Endurance Groups: Not Supported 00:13:44.778 Predictable Latency Mode: Not Supported 00:13:44.778 Traffic Based Keep ALive: Not Supported 00:13:44.778 Namespace Granularity: Not Supported 00:13:44.778 SQ Associations: Not Supported 00:13:44.778 UUID List: Not Supported 00:13:44.778 Multi-Domain Subsystem: Not Supported 00:13:44.778 Fixed Capacity Management: Not Supported 00:13:44.778 Variable Capacity Management: Not Supported 00:13:44.778 Delete Endurance Group: Not Supported 00:13:44.778 Delete NVM Set: Not Supported 00:13:44.778 Extended LBA Formats Supported: Not Supported 00:13:44.778 Flexible Data Placement Supported: Not Supported 00:13:44.778 00:13:44.778 Controller Memory Buffer Support 00:13:44.778 ================================ 00:13:44.778 Supported: No 00:13:44.778 00:13:44.778 Persistent Memory Region Support 00:13:44.778 ================================ 00:13:44.778 Supported: No 00:13:44.778 00:13:44.778 Admin Command Set Attributes 00:13:44.778 ============================ 00:13:44.778 Security Send/Receive: Not Supported 00:13:44.778 Format NVM: Not Supported 00:13:44.778 Firmware Activate/Download: Not Supported 00:13:44.778 Namespace Management: Not Supported 00:13:44.778 Device Self-Test: Not Supported 00:13:44.778 Directives: Not Supported 00:13:44.778 NVMe-MI: Not Supported 00:13:44.778 Virtualization Management: Not Supported 00:13:44.778 Doorbell Buffer Config: Not Supported 00:13:44.778 Get LBA Status Capability: Not Supported 00:13:44.778 Command & Feature Lockdown Capability: Not Supported 00:13:44.778 Abort Command Limit: 4 00:13:44.778 Async Event Request Limit: 4 00:13:44.778 Number of Firmware Slots: N/A 00:13:44.778 Firmware Slot 1 Read-Only: N/A 00:13:44.778 Firmware Activation Without Reset: N/A 00:13:44.778 Multiple 
Update Detection Support: N/A 00:13:44.778 Firmware Update Granularity: No Information Provided 00:13:44.778 Per-Namespace SMART Log: No 00:13:44.778 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.778 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:44.778 Command Effects Log Page: Supported 00:13:44.778 Get Log Page Extended Data: Supported 00:13:44.778 Telemetry Log Pages: Not Supported 00:13:44.778 Persistent Event Log Pages: Not Supported 00:13:44.778 Supported Log Pages Log Page: May Support 00:13:44.778 Commands Supported & Effects Log Page: Not Supported 00:13:44.778 Feature Identifiers & Effects Log Page:May Support 00:13:44.778 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.778 Data Area 4 for Telemetry Log: Not Supported 00:13:44.778 Error Log Page Entries Supported: 128 00:13:44.778 Keep Alive: Supported 00:13:44.778 Keep Alive Granularity: 10000 ms 00:13:44.778 00:13:44.778 NVM Command Set Attributes 00:13:44.778 ========================== 00:13:44.778 Submission Queue Entry Size 00:13:44.778 Max: 64 00:13:44.778 Min: 64 00:13:44.778 Completion Queue Entry Size 00:13:44.778 Max: 16 00:13:44.778 Min: 16 00:13:44.778 Number of Namespaces: 32 00:13:44.778 Compare Command: Supported 00:13:44.778 Write Uncorrectable Command: Not Supported 00:13:44.778 Dataset Management Command: Supported 00:13:44.778 Write Zeroes Command: Supported 00:13:44.778 Set Features Save Field: Not Supported 00:13:44.778 Reservations: Supported 00:13:44.778 Timestamp: Not Supported 00:13:44.778 Copy: Supported 00:13:44.778 Volatile Write Cache: Present 00:13:44.778 Atomic Write Unit (Normal): 1 00:13:44.778 Atomic Write Unit (PFail): 1 00:13:44.778 Atomic Compare & Write Unit: 1 00:13:44.778 Fused Compare & Write: Supported 00:13:44.778 Scatter-Gather List 00:13:44.778 SGL Command Set: Supported 00:13:44.778 SGL Keyed: Supported 00:13:44.778 SGL Bit Bucket Descriptor: Not Supported 00:13:44.778 SGL Metadata Pointer: Not Supported 00:13:44.778 Oversized SGL: Not Supported 00:13:44.778 SGL Metadata Address: Not Supported 00:13:44.778 SGL Offset: Supported 00:13:44.778 Transport SGL Data Block: Not Supported 00:13:44.779 Replay Protected Memory Block: Not Supported 00:13:44.779 00:13:44.779 Firmware Slot Information 00:13:44.779 ========================= 00:13:44.779 Active slot: 1 00:13:44.779 Slot 1 Firmware Revision: 24.01.1 00:13:44.779 00:13:44.779 00:13:44.779 Commands Supported and Effects 00:13:44.779 ============================== 00:13:44.779 Admin Commands 00:13:44.779 -------------- 00:13:44.779 Get Log Page (02h): Supported 00:13:44.779 Identify (06h): Supported 00:13:44.779 Abort (08h): Supported 00:13:44.779 Set Features (09h): Supported 00:13:44.779 Get Features (0Ah): Supported 00:13:44.779 Asynchronous Event Request (0Ch): Supported 00:13:44.779 Keep Alive (18h): Supported 00:13:44.779 I/O Commands 00:13:44.779 ------------ 00:13:44.779 Flush (00h): Supported LBA-Change 00:13:44.779 Write (01h): Supported LBA-Change 00:13:44.779 Read (02h): Supported 00:13:44.779 Compare (05h): Supported 00:13:44.779 Write Zeroes (08h): Supported LBA-Change 00:13:44.779 Dataset Management (09h): Supported LBA-Change 00:13:44.779 Copy (19h): Supported LBA-Change 00:13:44.779 Unknown (79h): Supported LBA-Change 00:13:44.779 Unknown (7Ah): Supported 00:13:44.779 00:13:44.779 Error Log 00:13:44.779 ========= 00:13:44.779 00:13:44.779 Arbitration 00:13:44.779 =========== 00:13:44.779 Arbitration Burst: 1 00:13:44.779 00:13:44.779 Power Management 00:13:44.779 ================ 00:13:44.779 
Number of Power States: 1 00:13:44.779 Current Power State: Power State #0 00:13:44.779 Power State #0: 00:13:44.779 Max Power: 0.00 W 00:13:44.779 Non-Operational State: Operational 00:13:44.779 Entry Latency: Not Reported 00:13:44.779 Exit Latency: Not Reported 00:13:44.779 Relative Read Throughput: 0 00:13:44.779 Relative Read Latency: 0 00:13:44.779 Relative Write Throughput: 0 00:13:44.779 Relative Write Latency: 0 00:13:44.779 Idle Power: Not Reported 00:13:44.779 Active Power: Not Reported 00:13:44.779 Non-Operational Permissive Mode: Not Supported 00:13:44.779 00:13:44.779 Health Information 00:13:44.779 ================== 00:13:44.779 Critical Warnings: 00:13:44.779 Available Spare Space: OK 00:13:44.779 Temperature: OK 00:13:44.779 Device Reliability: OK 00:13:44.779 Read Only: No 00:13:44.779 Volatile Memory Backup: OK 00:13:44.779 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.779 Temperature Threshold: [2024-09-29 00:26:00.509517] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.509527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.509531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.509540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 00:26:00.509568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab98d0, cid 7, qid 0 00:13:44.779 [2024-09-29 00:26:00.509643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.509651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.779 [2024-09-29 00:26:00.509654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.509659] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab98d0) on tqpair=0x1a5ad30 00:13:44.779 [2024-09-29 00:26:00.509695] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:44.779 [2024-09-29 00:26:00.509709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.779 [2024-09-29 00:26:00.509717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.779 [2024-09-29 00:26:00.509724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.779 [2024-09-29 00:26:00.509730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.779 [2024-09-29 00:26:00.509739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.509744] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.509748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.509756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 00:26:00.509778] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 
00:13:44.779 [2024-09-29 00:26:00.510053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.510068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.779 [2024-09-29 00:26:00.510073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.779 [2024-09-29 00:26:00.510087] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.510103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 00:26:00.510125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.779 [2024-09-29 00:26:00.510452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.510466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.779 [2024-09-29 00:26:00.510471] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.779 [2024-09-29 00:26:00.510481] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:44.779 [2024-09-29 00:26:00.510487] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:44.779 [2024-09-29 00:26:00.510498] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510502] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510506] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.510514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 00:26:00.510533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.779 [2024-09-29 00:26:00.510592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.510599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.779 [2024-09-29 00:26:00.510602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.779 [2024-09-29 00:26:00.510618] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.510634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 
00:26:00.510651] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.779 [2024-09-29 00:26:00.510867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.510881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.779 [2024-09-29 00:26:00.510886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510890] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.779 [2024-09-29 00:26:00.510902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.779 [2024-09-29 00:26:00.510911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.779 [2024-09-29 00:26:00.510918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.779 [2024-09-29 00:26:00.510936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.779 [2024-09-29 00:26:00.511266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.779 [2024-09-29 00:26:00.511280] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.511285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.511302] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511307] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511311] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.511319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.511346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.511491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.511499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.511504] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511508] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.511519] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.511536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.511553] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.511778] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.511792] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.511797] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511801] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.511813] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.511821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.511829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.511846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.512051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.512062] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.512067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.512083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512092] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.512099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.512116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.512512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.512529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.512534] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.512551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.512569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.512604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.512876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.512887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.512892] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.512908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.512917] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.512924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.512941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.513105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.513117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.513121] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513125] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.513137] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513142] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513146] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.513153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.513170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.513232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.513239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.513243] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513247] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.513258] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513263] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.513267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.513274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.513291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.516388] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.516408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.516413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.516418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on 
tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.516434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.516439] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.516443] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5ad30) 00:13:44.780 [2024-09-29 00:26:00.516452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:44.780 [2024-09-29 00:26:00.516477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab9350, cid 3, qid 0 00:13:44.780 [2024-09-29 00:26:00.516626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:44.780 [2024-09-29 00:26:00.516640] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:44.780 [2024-09-29 00:26:00.516645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:44.780 [2024-09-29 00:26:00.516664] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1ab9350) on tqpair=0x1a5ad30 00:13:44.780 [2024-09-29 00:26:00.516674] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:13:44.780 0 Kelvin (-273 Celsius) 00:13:44.780 Available Spare: 0% 00:13:44.780 Available Spare Threshold: 0% 00:13:44.780 Life Percentage Used: 0% 00:13:44.780 Data Units Read: 0 00:13:44.780 Data Units Written: 0 00:13:44.780 Host Read Commands: 0 00:13:44.780 Host Write Commands: 0 00:13:44.780 Controller Busy Time: 0 minutes 00:13:44.780 Power Cycles: 0 00:13:44.780 Power On Hours: 0 hours 00:13:44.780 Unsafe Shutdowns: 0 00:13:44.780 Unrecoverable Media Errors: 0 00:13:44.780 Lifetime Error Log Entries: 0 00:13:44.780 Warning Temperature Time: 0 minutes 00:13:44.780 Critical Temperature Time: 0 minutes 00:13:44.780 00:13:44.780 Number of Queues 00:13:44.780 ================ 00:13:44.780 Number of I/O Submission Queues: 127 00:13:44.780 Number of I/O Completion Queues: 127 00:13:44.780 00:13:44.780 Active Namespaces 00:13:44.780 ================= 00:13:44.780 Namespace ID:1 00:13:44.780 Error Recovery Timeout: Unlimited 00:13:44.780 Command Set Identifier: NVM (00h) 00:13:44.780 Deallocate: Supported 00:13:44.780 Deallocated/Unwritten Error: Not Supported 00:13:44.780 Deallocated Read Value: Unknown 00:13:44.780 Deallocate in Write Zeroes: Not Supported 00:13:44.780 Deallocated Guard Field: 0xFFFF 00:13:44.780 Flush: Supported 00:13:44.780 Reservation: Supported 00:13:44.780 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.780 Size (in LBAs): 131072 (0GiB) 00:13:44.780 Capacity (in LBAs): 131072 (0GiB) 00:13:44.780 Utilization (in LBAs): 131072 (0GiB) 00:13:44.780 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:44.780 EUI64: ABCDEF0123456789 00:13:44.780 UUID: 4ee90241-7d43-4f40-88d6-bf7a45173e4d 00:13:44.780 Thin Provisioning: Not Supported 00:13:44.780 Per-NS Atomic Units: Yes 00:13:44.781 Atomic Boundary Size (Normal): 0 00:13:44.781 Atomic Boundary Size (PFail): 0 00:13:44.781 Atomic Boundary Offset: 0 00:13:44.781 Maximum Single Source Range Length: 65535 00:13:44.781 Maximum Copy Length: 65535 00:13:44.781 Maximum Source Range Count: 1 00:13:44.781 NGUID/EUI64 Never Reused: No 00:13:44.781 Namespace Write Protected: No 00:13:44.781 Number of LBA Formats: 1 00:13:44.781 Current LBA Format: LBA Format #00 00:13:44.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:44.781 00:13:44.781 
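A quick sanity check on the namespace summary above: 131072 LBAs of 512-byte blocks is 131072 * 512 = 64 MiB, which is why the capacity rounds down to 0 GiB. The same controller and namespace data could also be pulled from an ordinary Linux initiator with nvme-cli against the listener used by this test; a minimal sketch, not part of this run, with the /dev/nvme1 device name purely illustrative:

  # discover what the target advertises on the listener used above
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # attach to the test subsystem, then dump the same identify data
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme id-ctrl /dev/nvme1
  nvme id-ns /dev/nvme1 -n 1
  # detach when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1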
00:26:00 -- host/identify.sh@51 -- # sync 00:13:44.781 00:26:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.781 00:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.781 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.781 00:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.781 00:26:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:44.781 00:26:00 -- host/identify.sh@56 -- # nvmftestfini 00:13:44.781 00:26:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.781 00:26:00 -- nvmf/common.sh@116 -- # sync 00:13:44.781 00:26:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:44.781 00:26:00 -- nvmf/common.sh@119 -- # set +e 00:13:44.781 00:26:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:44.781 00:26:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:44.781 rmmod nvme_tcp 00:13:44.781 rmmod nvme_fabrics 00:13:45.040 rmmod nvme_keyring 00:13:45.040 00:26:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.040 00:26:00 -- nvmf/common.sh@123 -- # set -e 00:13:45.040 00:26:00 -- nvmf/common.sh@124 -- # return 0 00:13:45.040 00:26:00 -- nvmf/common.sh@477 -- # '[' -n 68093 ']' 00:13:45.040 00:26:00 -- nvmf/common.sh@478 -- # killprocess 68093 00:13:45.040 00:26:00 -- common/autotest_common.sh@926 -- # '[' -z 68093 ']' 00:13:45.040 00:26:00 -- common/autotest_common.sh@930 -- # kill -0 68093 00:13:45.040 00:26:00 -- common/autotest_common.sh@931 -- # uname 00:13:45.040 00:26:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:45.040 00:26:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68093 00:13:45.040 00:26:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:45.040 killing process with pid 68093 00:13:45.040 00:26:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:45.040 00:26:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68093' 00:13:45.040 00:26:00 -- common/autotest_common.sh@945 -- # kill 68093 00:13:45.040 [2024-09-29 00:26:00.687233] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:45.040 00:26:00 -- common/autotest_common.sh@950 -- # wait 68093 00:13:45.040 00:26:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.040 00:26:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.040 00:26:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.040 00:26:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.040 00:26:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.040 00:26:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.040 00:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.040 00:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.300 00:26:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.300 ************************************ 00:13:45.300 END TEST nvmf_identify 00:13:45.300 ************************************ 00:13:45.300 00:13:45.300 real 0m2.469s 00:13:45.300 user 0m7.200s 00:13:45.300 sys 0m0.576s 00:13:45.300 00:26:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.300 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:45.300 00:26:00 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh 
--transport=tcp 00:13:45.300 00:26:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:45.300 00:26:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.300 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:13:45.300 ************************************ 00:13:45.300 START TEST nvmf_perf 00:13:45.300 ************************************ 00:13:45.300 00:26:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:45.300 * Looking for test storage... 00:13:45.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:45.300 00:26:01 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.300 00:26:01 -- nvmf/common.sh@7 -- # uname -s 00:13:45.300 00:26:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.300 00:26:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.300 00:26:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.300 00:26:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.300 00:26:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.300 00:26:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.300 00:26:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.300 00:26:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.300 00:26:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.300 00:26:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:13:45.300 00:26:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:13:45.300 00:26:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.300 00:26:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.300 00:26:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.300 00:26:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.300 00:26:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.300 00:26:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.300 00:26:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.300 00:26:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 00:26:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 00:26:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 00:26:01 -- paths/export.sh@5 -- # export PATH 00:13:45.300 00:26:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.300 00:26:01 -- nvmf/common.sh@46 -- # : 0 00:13:45.300 00:26:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.300 00:26:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.300 00:26:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.300 00:26:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.300 00:26:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.300 00:26:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:45.300 00:26:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.300 00:26:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.300 00:26:01 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.300 00:26:01 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.300 00:26:01 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:45.300 00:26:01 -- host/perf.sh@17 -- # nvmftestinit 00:13:45.300 00:26:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.300 00:26:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.300 00:26:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.300 00:26:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.300 00:26:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.300 00:26:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.300 00:26:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.300 00:26:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.300 00:26:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:45.300 00:26:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.300 00:26:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.300 00:26:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.300 00:26:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.300 00:26:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.300 00:26:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.300 00:26:01 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.300 00:26:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.300 00:26:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.300 00:26:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.300 00:26:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.300 00:26:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.300 00:26:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.300 00:26:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.300 00:26:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.300 Cannot find device "nvmf_tgt_br" 00:13:45.300 00:26:01 -- nvmf/common.sh@154 -- # true 00:13:45.300 00:26:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.300 Cannot find device "nvmf_tgt_br2" 00:13:45.300 00:26:01 -- nvmf/common.sh@155 -- # true 00:13:45.300 00:26:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.300 00:26:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.300 Cannot find device "nvmf_tgt_br" 00:13:45.300 00:26:01 -- nvmf/common.sh@157 -- # true 00:13:45.300 00:26:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.300 Cannot find device "nvmf_tgt_br2" 00:13:45.300 00:26:01 -- nvmf/common.sh@158 -- # true 00:13:45.300 00:26:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.560 00:26:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:45.560 00:26:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.560 00:26:01 -- nvmf/common.sh@161 -- # true 00:13:45.560 00:26:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.560 00:26:01 -- nvmf/common.sh@162 -- # true 00:13:45.560 00:26:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.560 00:26:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.560 00:26:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.560 00:26:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.560 00:26:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.560 00:26:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.560 00:26:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.560 00:26:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.560 00:26:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.560 00:26:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:45.560 00:26:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:45.560 00:26:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:45.560 00:26:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:45.560 00:26:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.560 00:26:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:13:45.560 00:26:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.560 00:26:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:45.560 00:26:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:45.560 00:26:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.560 00:26:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.560 00:26:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.560 00:26:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.560 00:26:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.560 00:26:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:45.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:13:45.560 00:13:45.560 --- 10.0.0.2 ping statistics --- 00:13:45.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.560 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:45.560 00:26:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:45.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:45.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:45.560 00:13:45.560 --- 10.0.0.3 ping statistics --- 00:13:45.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.560 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:45.560 00:26:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:45.560 00:13:45.560 --- 10.0.0.1 ping statistics --- 00:13:45.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.560 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:45.560 00:26:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.560 00:26:01 -- nvmf/common.sh@421 -- # return 0 00:13:45.560 00:26:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:45.560 00:26:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.560 00:26:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:45.560 00:26:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:45.560 00:26:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.560 00:26:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:45.560 00:26:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:45.819 00:26:01 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:45.819 00:26:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:45.819 00:26:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:45.819 00:26:01 -- common/autotest_common.sh@10 -- # set +x 00:13:45.819 00:26:01 -- nvmf/common.sh@469 -- # nvmfpid=68298 00:13:45.819 00:26:01 -- nvmf/common.sh@470 -- # waitforlisten 68298 00:13:45.819 00:26:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.819 00:26:01 -- common/autotest_common.sh@819 -- # '[' -z 68298 ']' 00:13:45.819 00:26:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.819 00:26:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
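The NET_TYPE=virt plumbing that nvmf_veth_init builds above (target namespace, veth pairs, bridge, firewall rule, ping checks) can be reproduced by hand with the same commands; a condensed sketch using the script's interface names, showing one target interface for brevity:

  # namespace for nvmf_tgt plus veth pairs (host-side peers end in _br)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator answers on 10.0.0.1, the target listener will sit on 10.0.0.2
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and let NVMe/TCP (port 4420) in
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # same reachability check the script performs before starting the target
  ping -c 1 10.0.0.2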
00:13:45.819 00:26:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.819 00:26:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.819 00:26:01 -- common/autotest_common.sh@10 -- # set +x 00:13:45.819 [2024-09-29 00:26:01.479858] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:13:45.819 [2024-09-29 00:26:01.479959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.819 [2024-09-29 00:26:01.619213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.078 [2024-09-29 00:26:01.674733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.078 [2024-09-29 00:26:01.675079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.078 [2024-09-29 00:26:01.675268] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.078 [2024-09-29 00:26:01.675421] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.078 [2024-09-29 00:26:01.675637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.078 [2024-09-29 00:26:01.676297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.078 [2024-09-29 00:26:01.676400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.078 [2024-09-29 00:26:01.676406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.646 00:26:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.646 00:26:02 -- common/autotest_common.sh@852 -- # return 0 00:13:46.646 00:26:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:46.646 00:26:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:46.646 00:26:02 -- common/autotest_common.sh@10 -- # set +x 00:13:46.904 00:26:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.904 00:26:02 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:46.904 00:26:02 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:47.162 00:26:02 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:47.162 00:26:02 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:47.421 00:26:03 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:47.421 00:26:03 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:47.680 00:26:03 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:47.680 00:26:03 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:47.680 00:26:03 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:47.680 00:26:03 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:47.680 00:26:03 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:47.938 [2024-09-29 00:26:03.667814] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.938 00:26:03 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:48.198 00:26:03 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.198 00:26:03 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.457 00:26:04 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:48.457 00:26:04 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:48.715 00:26:04 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.975 [2024-09-29 00:26:04.593094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.975 00:26:04 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.234 00:26:04 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:49.234 00:26:04 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:49.234 00:26:04 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:49.234 00:26:04 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:50.171 Initializing NVMe Controllers 00:13:50.171 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:50.171 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:50.171 Initialization complete. Launching workers. 00:13:50.171 ======================================================== 00:13:50.171 Latency(us) 00:13:50.171 Device Information : IOPS MiB/s Average min max 00:13:50.171 PCIE (0000:00:06.0) NSID 1 from core 0: 22956.94 89.68 1393.71 382.71 7571.17 00:13:50.171 ======================================================== 00:13:50.171 Total : 22956.94 89.68 1393.71 382.71 7571.17 00:13:50.171 00:13:50.171 00:26:05 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:51.548 Initializing NVMe Controllers 00:13:51.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.548 Initialization complete. Launching workers. 
00:13:51.548 ======================================================== 00:13:51.548 Latency(us) 00:13:51.548 Device Information : IOPS MiB/s Average min max 00:13:51.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3630.99 14.18 275.08 97.40 4252.36 00:13:51.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8127.98 5942.87 12035.98 00:13:51.548 ======================================================== 00:13:51.548 Total : 3754.98 14.67 534.41 97.40 12035.98 00:13:51.548 00:13:51.548 00:26:07 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:52.925 Initializing NVMe Controllers 00:13:52.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:52.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:52.925 Initialization complete. Launching workers. 00:13:52.925 ======================================================== 00:13:52.925 Latency(us) 00:13:52.925 Device Information : IOPS MiB/s Average min max 00:13:52.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8968.63 35.03 3568.06 477.97 10310.44 00:13:52.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3928.09 15.34 8194.85 5335.92 12642.46 00:13:52.925 ======================================================== 00:13:52.925 Total : 12896.72 50.38 4977.29 477.97 12642.46 00:13:52.925 00:13:52.925 00:26:08 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:52.925 00:26:08 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:55.468 Initializing NVMe Controllers 00:13:55.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.468 Controller IO queue size 128, less than required. 00:13:55.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.468 Controller IO queue size 128, less than required. 00:13:55.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.468 Initialization complete. Launching workers. 
00:13:55.468 ======================================================== 00:13:55.468 Latency(us) 00:13:55.468 Device Information : IOPS MiB/s Average min max 00:13:55.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1913.49 478.37 67659.32 34492.34 128242.24 00:13:55.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.00 166.25 204396.00 100219.02 349760.29 00:13:55.468 ======================================================== 00:13:55.468 Total : 2578.48 644.62 102923.97 34492.34 349760.29 00:13:55.468 00:13:55.468 00:26:11 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:55.727 No valid NVMe controllers or AIO or URING devices found 00:13:55.727 Initializing NVMe Controllers 00:13:55.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.727 Controller IO queue size 128, less than required. 00:13:55.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.727 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:55.727 Controller IO queue size 128, less than required. 00:13:55.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.727 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:55.727 WARNING: Some requested NVMe devices were skipped 00:13:55.727 00:26:11 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:58.283 Initializing NVMe Controllers 00:13:58.283 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.283 Controller IO queue size 128, less than required. 00:13:58.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.283 Controller IO queue size 128, less than required. 00:13:58.283 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:58.283 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:58.283 Initialization complete. Launching workers. 
00:13:58.283 00:13:58.283 ==================== 00:13:58.283 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:58.283 TCP transport: 00:13:58.283 polls: 7294 00:13:58.283 idle_polls: 0 00:13:58.283 sock_completions: 7294 00:13:58.283 nvme_completions: 6617 00:13:58.283 submitted_requests: 10067 00:13:58.283 queued_requests: 1 00:13:58.283 00:13:58.283 ==================== 00:13:58.283 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:58.283 TCP transport: 00:13:58.283 polls: 7779 00:13:58.283 idle_polls: 0 00:13:58.283 sock_completions: 7779 00:13:58.283 nvme_completions: 6614 00:13:58.283 submitted_requests: 10128 00:13:58.283 queued_requests: 1 00:13:58.283 ======================================================== 00:13:58.283 Latency(us) 00:13:58.283 Device Information : IOPS MiB/s Average min max 00:13:58.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1715.94 428.99 75604.97 41761.74 124326.45 00:13:58.283 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1714.94 428.74 75187.15 32615.79 124706.37 00:13:58.283 ======================================================== 00:13:58.283 Total : 3430.88 857.72 75396.12 32615.79 124706.37 00:13:58.283 00:13:58.283 00:26:13 -- host/perf.sh@66 -- # sync 00:13:58.283 00:26:14 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.542 00:26:14 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:13:58.542 00:26:14 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:13:58.542 00:26:14 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:13:58.800 00:26:14 -- host/perf.sh@72 -- # ls_guid=aa3f8352-8700-4e4d-b1c7-034fcd1796be 00:13:58.800 00:26:14 -- host/perf.sh@73 -- # get_lvs_free_mb aa3f8352-8700-4e4d-b1c7-034fcd1796be 00:13:58.800 00:26:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=aa3f8352-8700-4e4d-b1c7-034fcd1796be 00:13:58.800 00:26:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:13:58.800 00:26:14 -- common/autotest_common.sh@1345 -- # local fc 00:13:58.800 00:26:14 -- common/autotest_common.sh@1346 -- # local cs 00:13:58.800 00:26:14 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:59.058 00:26:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:13:59.058 { 00:13:59.058 "uuid": "aa3f8352-8700-4e4d-b1c7-034fcd1796be", 00:13:59.058 "name": "lvs_0", 00:13:59.058 "base_bdev": "Nvme0n1", 00:13:59.058 "total_data_clusters": 1278, 00:13:59.058 "free_clusters": 1278, 00:13:59.058 "block_size": 4096, 00:13:59.058 "cluster_size": 4194304 00:13:59.058 } 00:13:59.058 ]' 00:13:59.058 00:26:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="aa3f8352-8700-4e4d-b1c7-034fcd1796be") .free_clusters' 00:13:59.058 00:26:14 -- common/autotest_common.sh@1348 -- # fc=1278 00:13:59.058 00:26:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="aa3f8352-8700-4e4d-b1c7-034fcd1796be") .cluster_size' 00:13:59.058 00:26:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:13:59.058 00:26:14 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:13:59.058 00:26:14 -- common/autotest_common.sh@1353 -- # echo 5112 00:13:59.058 5112 00:13:59.058 00:26:14 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:13:59.059 00:26:14 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u aa3f8352-8700-4e4d-b1c7-034fcd1796be lbd_0 5112 00:13:59.317 00:26:15 -- host/perf.sh@80 -- # lb_guid=2acaa66e-654d-4068-85e8-f0f94ff01b0a 00:13:59.317 00:26:15 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2acaa66e-654d-4068-85e8-f0f94ff01b0a lvs_n_0 00:13:59.884 00:26:15 -- host/perf.sh@83 -- # ls_nested_guid=ffe56d7f-a10d-4ba7-9054-c6e862fc9180 00:13:59.884 00:26:15 -- host/perf.sh@84 -- # get_lvs_free_mb ffe56d7f-a10d-4ba7-9054-c6e862fc9180 00:13:59.884 00:26:15 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ffe56d7f-a10d-4ba7-9054-c6e862fc9180 00:13:59.884 00:26:15 -- common/autotest_common.sh@1344 -- # local lvs_info 00:13:59.884 00:26:15 -- common/autotest_common.sh@1345 -- # local fc 00:13:59.884 00:26:15 -- common/autotest_common.sh@1346 -- # local cs 00:13:59.884 00:26:15 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:00.143 00:26:15 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:00.143 { 00:14:00.143 "uuid": "aa3f8352-8700-4e4d-b1c7-034fcd1796be", 00:14:00.143 "name": "lvs_0", 00:14:00.143 "base_bdev": "Nvme0n1", 00:14:00.143 "total_data_clusters": 1278, 00:14:00.143 "free_clusters": 0, 00:14:00.143 "block_size": 4096, 00:14:00.143 "cluster_size": 4194304 00:14:00.143 }, 00:14:00.143 { 00:14:00.143 "uuid": "ffe56d7f-a10d-4ba7-9054-c6e862fc9180", 00:14:00.143 "name": "lvs_n_0", 00:14:00.143 "base_bdev": "2acaa66e-654d-4068-85e8-f0f94ff01b0a", 00:14:00.143 "total_data_clusters": 1276, 00:14:00.143 "free_clusters": 1276, 00:14:00.143 "block_size": 4096, 00:14:00.143 "cluster_size": 4194304 00:14:00.143 } 00:14:00.143 ]' 00:14:00.143 00:26:15 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ffe56d7f-a10d-4ba7-9054-c6e862fc9180") .free_clusters' 00:14:00.143 00:26:15 -- common/autotest_common.sh@1348 -- # fc=1276 00:14:00.143 00:26:15 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ffe56d7f-a10d-4ba7-9054-c6e862fc9180") .cluster_size' 00:14:00.143 00:26:15 -- common/autotest_common.sh@1349 -- # cs=4194304 00:14:00.143 00:26:15 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:14:00.143 5104 00:14:00.143 00:26:15 -- common/autotest_common.sh@1353 -- # echo 5104 00:14:00.143 00:26:15 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:00.143 00:26:15 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ffe56d7f-a10d-4ba7-9054-c6e862fc9180 lbd_nest_0 5104 00:14:00.401 00:26:16 -- host/perf.sh@88 -- # lb_nested_guid=3f13ef09-9dcf-44d5-b451-381d54127964 00:14:00.401 00:26:16 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:00.659 00:26:16 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:00.659 00:26:16 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3f13ef09-9dcf-44d5-b451-381d54127964 00:14:00.918 00:26:16 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.176 00:26:16 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:01.176 00:26:16 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:01.176 00:26:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:01.176 00:26:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:01.176 00:26:16 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:01.434 No valid NVMe controllers or AIO or URING devices found 00:14:01.434 Initializing NVMe Controllers 00:14:01.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.434 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:01.434 WARNING: Some requested NVMe devices were skipped 00:14:01.434 00:26:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:01.434 00:26:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:13.647 Initializing NVMe Controllers 00:14:13.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:13.647 Initialization complete. Launching workers. 00:14:13.647 ======================================================== 00:14:13.647 Latency(us) 00:14:13.647 Device Information : IOPS MiB/s Average min max 00:14:13.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.60 121.32 1030.33 330.21 8387.61 00:14:13.647 ======================================================== 00:14:13.647 Total : 970.60 121.32 1030.33 330.21 8387.61 00:14:13.647 00:14:13.647 00:26:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:13.647 00:26:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:13.647 00:26:27 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:13.647 No valid NVMe controllers or AIO or URING devices found 00:14:13.647 Initializing NVMe Controllers 00:14:13.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:13.647 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:13.647 WARNING: Some requested NVMe devices were skipped 00:14:13.647 00:26:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:13.647 00:26:27 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:23.617 Initializing NVMe Controllers 00:14:23.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:23.617 Initialization complete. Launching workers. 
00:14:23.617 ======================================================== 00:14:23.617 Latency(us) 00:14:23.617 Device Information : IOPS MiB/s Average min max 00:14:23.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1345.31 168.16 23799.52 6374.52 75607.92 00:14:23.617 ======================================================== 00:14:23.617 Total : 1345.31 168.16 23799.52 6374.52 75607.92 00:14:23.617 00:14:23.617 00:26:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:23.617 00:26:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:23.617 00:26:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:23.617 No valid NVMe controllers or AIO or URING devices found 00:14:23.617 Initializing NVMe Controllers 00:14:23.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.617 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:23.617 WARNING: Some requested NVMe devices were skipped 00:14:23.617 00:26:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:23.617 00:26:38 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:33.591 Initializing NVMe Controllers 00:14:33.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.591 Controller IO queue size 128, less than required. 00:14:33.591 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:33.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.591 Initialization complete. Launching workers. 
00:14:33.591 ======================================================== 00:14:33.591 Latency(us) 00:14:33.591 Device Information : IOPS MiB/s Average min max 00:14:33.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3989.60 498.70 32144.42 10550.85 65475.15 00:14:33.591 ======================================================== 00:14:33.591 Total : 3989.60 498.70 32144.42 10550.85 65475.15 00:14:33.591 00:14:33.591 00:26:48 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.591 00:26:49 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3f13ef09-9dcf-44d5-b451-381d54127964 00:14:33.591 00:26:49 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:33.848 00:26:49 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2acaa66e-654d-4068-85e8-f0f94ff01b0a 00:14:34.106 00:26:49 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:34.365 00:26:50 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:34.365 00:26:50 -- host/perf.sh@114 -- # nvmftestfini 00:14:34.365 00:26:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:34.365 00:26:50 -- nvmf/common.sh@116 -- # sync 00:14:34.365 00:26:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:34.365 00:26:50 -- nvmf/common.sh@119 -- # set +e 00:14:34.365 00:26:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:34.365 00:26:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:34.365 rmmod nvme_tcp 00:14:34.365 rmmod nvme_fabrics 00:14:34.636 rmmod nvme_keyring 00:14:34.636 00:26:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:34.636 00:26:50 -- nvmf/common.sh@123 -- # set -e 00:14:34.636 00:26:50 -- nvmf/common.sh@124 -- # return 0 00:14:34.636 00:26:50 -- nvmf/common.sh@477 -- # '[' -n 68298 ']' 00:14:34.636 00:26:50 -- nvmf/common.sh@478 -- # killprocess 68298 00:14:34.636 00:26:50 -- common/autotest_common.sh@926 -- # '[' -z 68298 ']' 00:14:34.636 00:26:50 -- common/autotest_common.sh@930 -- # kill -0 68298 00:14:34.636 00:26:50 -- common/autotest_common.sh@931 -- # uname 00:14:34.636 00:26:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.636 00:26:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68298 00:14:34.636 killing process with pid 68298 00:14:34.636 00:26:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:34.636 00:26:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:34.636 00:26:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68298' 00:14:34.636 00:26:50 -- common/autotest_common.sh@945 -- # kill 68298 00:14:34.636 00:26:50 -- common/autotest_common.sh@950 -- # wait 68298 00:14:36.026 00:26:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:36.026 00:26:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:36.026 00:26:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:36.026 00:26:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.026 00:26:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:36.026 00:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.026 00:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.026 00:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.026 00:26:51 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:36.026 ************************************ 00:14:36.026 END TEST nvmf_perf 00:14:36.026 ************************************ 00:14:36.026 00:14:36.026 real 0m50.719s 00:14:36.026 user 3m10.254s 00:14:36.026 sys 0m12.755s 00:14:36.026 00:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.026 00:26:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.026 00:26:51 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:36.026 00:26:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:36.026 00:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:36.026 00:26:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.026 ************************************ 00:14:36.026 START TEST nvmf_fio_host 00:14:36.026 ************************************ 00:14:36.026 00:26:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:36.026 * Looking for test storage... 00:14:36.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:36.026 00:26:51 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.026 00:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.026 00:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.026 00:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.026 00:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.026 00:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.026 00:26:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.026 00:26:51 -- paths/export.sh@5 -- # export PATH 00:14:36.027 00:26:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.027 00:26:51 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.027 00:26:51 -- nvmf/common.sh@7 -- # uname -s 00:14:36.027 00:26:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.027 00:26:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.027 00:26:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.027 00:26:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.027 00:26:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.027 00:26:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.027 00:26:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.027 00:26:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.027 00:26:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.027 00:26:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:14:36.027 00:26:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:14:36.027 00:26:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.027 00:26:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.027 00:26:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.027 00:26:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.027 00:26:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.027 00:26:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.027 00:26:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.027 00:26:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.027 00:26:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.027 00:26:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.027 00:26:51 -- paths/export.sh@5 -- # export PATH 00:14:36.027 00:26:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.027 00:26:51 -- nvmf/common.sh@46 -- # : 0 00:14:36.027 00:26:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:36.027 00:26:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:36.027 00:26:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:36.027 00:26:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.027 00:26:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.027 00:26:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:36.027 00:26:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:36.027 00:26:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:36.027 00:26:51 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.027 00:26:51 -- host/fio.sh@14 -- # nvmftestinit 00:14:36.027 00:26:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:36.027 00:26:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.027 00:26:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:36.027 00:26:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:36.027 00:26:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:36.027 00:26:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.027 00:26:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.027 00:26:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.027 00:26:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:36.027 00:26:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:36.027 00:26:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.027 00:26:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.027 00:26:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.027 00:26:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:36.027 00:26:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.027 00:26:51 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.027 00:26:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.027 00:26:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.027 00:26:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.027 00:26:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.027 00:26:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.027 00:26:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.027 00:26:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:36.027 00:26:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:36.027 Cannot find device "nvmf_tgt_br" 00:14:36.027 00:26:51 -- nvmf/common.sh@154 -- # true 00:14:36.027 00:26:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.285 Cannot find device "nvmf_tgt_br2" 00:14:36.285 00:26:51 -- nvmf/common.sh@155 -- # true 00:14:36.285 00:26:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:36.285 00:26:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:36.285 Cannot find device "nvmf_tgt_br" 00:14:36.285 00:26:51 -- nvmf/common.sh@157 -- # true 00:14:36.285 00:26:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:36.285 Cannot find device "nvmf_tgt_br2" 00:14:36.285 00:26:51 -- nvmf/common.sh@158 -- # true 00:14:36.285 00:26:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:36.285 00:26:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:36.285 00:26:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.285 00:26:51 -- nvmf/common.sh@161 -- # true 00:14:36.285 00:26:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.285 00:26:51 -- nvmf/common.sh@162 -- # true 00:14:36.285 00:26:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.285 00:26:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.285 00:26:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.285 00:26:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.285 00:26:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.285 00:26:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.285 00:26:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.285 00:26:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.285 00:26:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.285 00:26:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:36.285 00:26:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:36.285 00:26:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:36.286 00:26:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:36.286 00:26:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.286 00:26:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:36.286 00:26:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.286 00:26:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:36.286 00:26:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:36.286 00:26:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.544 00:26:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.544 00:26:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.544 00:26:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.544 00:26:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.544 00:26:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:36.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:36.544 00:14:36.544 --- 10.0.0.2 ping statistics --- 00:14:36.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.544 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:36.544 00:26:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:36.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:36.544 00:14:36.544 --- 10.0.0.3 ping statistics --- 00:14:36.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.544 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:36.544 00:26:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:36.544 00:14:36.544 --- 10.0.0.1 ping statistics --- 00:14:36.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.544 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:36.544 00:26:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.544 00:26:52 -- nvmf/common.sh@421 -- # return 0 00:14:36.544 00:26:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:36.544 00:26:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.544 00:26:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:36.544 00:26:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:36.544 00:26:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.544 00:26:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:36.544 00:26:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:36.544 00:26:52 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:36.544 00:26:52 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:36.544 00:26:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:36.544 00:26:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.544 00:26:52 -- host/fio.sh@24 -- # nvmfpid=69120 00:14:36.544 00:26:52 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.544 00:26:52 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.544 00:26:52 -- host/fio.sh@28 -- # waitforlisten 69120 00:14:36.544 00:26:52 -- common/autotest_common.sh@819 -- # '[' -z 69120 ']' 00:14:36.544 00:26:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.544 00:26:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:36.544 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.544 00:26:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.544 00:26:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:36.544 00:26:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.544 [2024-09-29 00:26:52.280160] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:14:36.544 [2024-09-29 00:26:52.280274] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.802 [2024-09-29 00:26:52.422397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.802 [2024-09-29 00:26:52.495781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:36.802 [2024-09-29 00:26:52.495956] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.802 [2024-09-29 00:26:52.495971] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.802 [2024-09-29 00:26:52.495982] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.802 [2024-09-29 00:26:52.496114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.802 [2024-09-29 00:26:52.496804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.802 [2024-09-29 00:26:52.497009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.802 [2024-09-29 00:26:52.497018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.737 00:26:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:37.737 00:26:53 -- common/autotest_common.sh@852 -- # return 0 00:14:37.737 00:26:53 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:37.737 [2024-09-29 00:26:53.558414] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.995 00:26:53 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:37.995 00:26:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:37.995 00:26:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.995 00:26:53 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:38.253 Malloc1 00:14:38.253 00:26:53 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:38.511 00:26:54 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.511 00:26:54 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.076 [2024-09-29 00:26:54.628743] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.076 00:26:54 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.076 00:26:54 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:39.076 00:26:54 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:39.076 00:26:54 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:39.076 00:26:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:39.076 00:26:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:39.076 00:26:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:39.076 00:26:54 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:39.076 00:26:54 -- common/autotest_common.sh@1320 -- # shift 00:14:39.076 00:26:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:39.076 00:26:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.076 00:26:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:39.076 00:26:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:39.076 00:26:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:39.333 00:26:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:39.333 00:26:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:39.333 00:26:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:39.333 00:26:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:39.333 00:26:54 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:39.333 00:26:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:39.333 00:26:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:39.333 00:26:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:39.333 00:26:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:39.333 00:26:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:39.333 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:39.333 fio-3.35 00:14:39.333 Starting 1 thread 00:14:41.868 00:14:41.868 test: (groupid=0, jobs=1): err= 0: pid=69204: Sun Sep 29 00:26:57 2024 00:14:41.868 read: IOPS=9445, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec) 00:14:41.868 slat (nsec): min=1828, max=1644.2k, avg=2542.76, stdev=12258.21 00:14:41.868 clat (usec): min=2086, max=12296, avg=7053.61, stdev=504.78 00:14:41.868 lat (usec): min=2117, max=12298, avg=7056.15, stdev=504.83 00:14:41.868 clat percentiles (usec): 00:14:41.868 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:14:41.868 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:14:41.868 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7832], 00:14:41.868 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9765], 99.95th=[10290], 00:14:41.868 | 99.99th=[12256] 00:14:41.868 bw ( KiB/s): min=36533, max=39040, per=99.86%, avg=37729.25, stdev=1036.56, samples=4 00:14:41.868 iops : min= 9133, max= 9760, avg=9432.25, stdev=259.24, samples=4 00:14:41.868 write: IOPS=9443, BW=36.9MiB/s (38.7MB/s)(74.0MiB/2006msec); 0 zone resets 00:14:41.868 slat 
(nsec): min=1873, max=187197, avg=2528.89, stdev=2080.34 00:14:41.868 clat (usec): min=1978, max=11668, avg=6444.91, stdev=480.60 00:14:41.868 lat (usec): min=1989, max=11670, avg=6447.44, stdev=480.50 00:14:41.868 clat percentiles (usec): 00:14:41.868 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:14:41.868 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:14:41.868 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:14:41.868 | 99.00th=[ 7504], 99.50th=[ 8586], 99.90th=[10159], 99.95th=[10552], 00:14:41.868 | 99.99th=[11600] 00:14:41.868 bw ( KiB/s): min=37426, max=38144, per=99.87%, avg=37724.50, stdev=328.43, samples=4 00:14:41.868 iops : min= 9356, max= 9536, avg=9431.00, stdev=82.26, samples=4 00:14:41.868 lat (msec) : 2=0.01%, 4=0.12%, 10=99.79%, 20=0.09% 00:14:41.868 cpu : usr=72.67%, sys=19.70%, ctx=24, majf=0, minf=5 00:14:41.868 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:41.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.868 issued rwts: total=18947,18944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.868 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.868 00:14:41.868 Run status group 0 (all jobs): 00:14:41.868 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:14:41.868 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.0MiB (77.6MB), run=2006-2006msec 00:14:41.868 00:26:57 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:41.869 00:26:57 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:41.869 00:26:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:41.869 00:26:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:41.869 00:26:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:41.869 00:26:57 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:41.869 00:26:57 -- common/autotest_common.sh@1320 -- # shift 00:14:41.869 00:26:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:41.869 00:26:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:41.869 00:26:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:41.869 00:26:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:41.869 00:26:57 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:14:41.869 00:26:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:41.869 00:26:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:41.869 00:26:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:41.869 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:41.869 fio-3.35 00:14:41.869 Starting 1 thread 00:14:44.399 00:14:44.399 test: (groupid=0, jobs=1): err= 0: pid=69253: Sun Sep 29 00:26:59 2024 00:14:44.399 read: IOPS=8453, BW=132MiB/s (139MB/s)(265MiB/2007msec) 00:14:44.399 slat (usec): min=2, max=126, avg= 3.90, stdev= 2.56 00:14:44.399 clat (usec): min=1866, max=17656, avg=8191.96, stdev=2659.83 00:14:44.399 lat (usec): min=1869, max=17663, avg=8195.86, stdev=2660.00 00:14:44.399 clat percentiles (usec): 00:14:44.399 | 1.00th=[ 3916], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5866], 00:14:44.399 | 30.00th=[ 6521], 40.00th=[ 7177], 50.00th=[ 7701], 60.00th=[ 8356], 00:14:44.399 | 70.00th=[ 9241], 80.00th=[10421], 90.00th=[11863], 95.00th=[13435], 00:14:44.399 | 99.00th=[15664], 99.50th=[16188], 99.90th=[17433], 99.95th=[17695], 00:14:44.399 | 99.99th=[17695] 00:14:44.399 bw ( KiB/s): min=62240, max=81312, per=52.49%, avg=71000.00, stdev=8101.43, samples=4 00:14:44.399 iops : min= 3890, max= 5082, avg=4437.50, stdev=506.34, samples=4 00:14:44.399 write: IOPS=4995, BW=78.1MiB/s (81.8MB/s)(144MiB/1849msec); 0 zone resets 00:14:44.399 slat (usec): min=32, max=364, avg=38.80, stdev= 9.76 00:14:44.399 clat (usec): min=2904, max=20331, avg=11925.04, stdev=2016.15 00:14:44.399 lat (usec): min=2939, max=20367, avg=11963.85, stdev=2018.00 00:14:44.399 clat percentiles (usec): 00:14:44.399 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10290], 00:14:44.399 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12256], 00:14:44.399 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14746], 95.00th=[15533], 00:14:44.399 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:14:44.399 | 99.99th=[20317] 00:14:44.399 bw ( KiB/s): min=63776, max=85920, per=92.45%, avg=73896.00, stdev=9271.11, samples=4 00:14:44.399 iops : min= 3986, max= 5370, avg=4618.50, stdev=579.44, samples=4 00:14:44.399 lat (msec) : 2=0.01%, 4=0.84%, 10=54.63%, 20=44.51%, 50=0.01% 00:14:44.399 cpu : usr=78.62%, sys=15.70%, ctx=23, majf=0, minf=10 00:14:44.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:44.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:44.399 issued rwts: total=16966,9237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:44.399 00:14:44.399 Run status group 0 (all jobs): 00:14:44.399 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=265MiB (278MB), run=2007-2007msec 00:14:44.399 WRITE: bw=78.1MiB/s (81.8MB/s), 78.1MiB/s-78.1MiB/s (81.8MB/s-81.8MB/s), io=144MiB (151MB), run=1849-1849msec 00:14:44.399 00:26:59 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.399 00:27:00 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:44.399 00:27:00 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:44.399 
00:27:00 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:44.399 00:27:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:44.399 00:27:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:44.399 00:27:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:44.399 00:27:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:44.399 00:27:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:44.399 00:27:00 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:14:44.399 00:27:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:44.658 00:27:00 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:44.916 Nvme0n1 00:14:44.916 00:27:00 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:45.175 00:27:00 -- host/fio.sh@53 -- # ls_guid=695b0165-282c-4cc6-ac79-0b0bfc93d6e5 00:14:45.175 00:27:00 -- host/fio.sh@54 -- # get_lvs_free_mb 695b0165-282c-4cc6-ac79-0b0bfc93d6e5 00:14:45.175 00:27:00 -- common/autotest_common.sh@1343 -- # local lvs_uuid=695b0165-282c-4cc6-ac79-0b0bfc93d6e5 00:14:45.175 00:27:00 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:45.175 00:27:00 -- common/autotest_common.sh@1345 -- # local fc 00:14:45.175 00:27:00 -- common/autotest_common.sh@1346 -- # local cs 00:14:45.175 00:27:00 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:45.175 00:27:01 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:45.175 { 00:14:45.175 "uuid": "695b0165-282c-4cc6-ac79-0b0bfc93d6e5", 00:14:45.175 "name": "lvs_0", 00:14:45.175 "base_bdev": "Nvme0n1", 00:14:45.175 "total_data_clusters": 4, 00:14:45.175 "free_clusters": 4, 00:14:45.175 "block_size": 4096, 00:14:45.175 "cluster_size": 1073741824 00:14:45.175 } 00:14:45.175 ]' 00:14:45.434 00:27:01 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="695b0165-282c-4cc6-ac79-0b0bfc93d6e5") .free_clusters' 00:14:45.434 00:27:01 -- common/autotest_common.sh@1348 -- # fc=4 00:14:45.434 00:27:01 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="695b0165-282c-4cc6-ac79-0b0bfc93d6e5") .cluster_size' 00:14:45.434 4096 00:14:45.434 00:27:01 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:14:45.434 00:27:01 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:14:45.434 00:27:01 -- common/autotest_common.sh@1353 -- # echo 4096 00:14:45.434 00:27:01 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:45.692 c1d54683-afb9-4b34-a606-bc485325eb6a 00:14:45.692 00:27:01 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:45.950 00:27:01 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:46.209 00:27:01 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:46.468 00:27:02 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:46.468 00:27:02 -- 
common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:46.468 00:27:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:46.468 00:27:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:46.468 00:27:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:46.468 00:27:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:46.468 00:27:02 -- common/autotest_common.sh@1320 -- # shift 00:14:46.468 00:27:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:46.468 00:27:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:46.468 00:27:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:46.468 00:27:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:46.468 00:27:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:46.468 00:27:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:46.468 00:27:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:46.468 00:27:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:46.468 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:46.468 fio-3.35 00:14:46.468 Starting 1 thread 00:14:48.998 00:14:48.999 test: (groupid=0, jobs=1): err= 0: pid=69357: Sun Sep 29 00:27:04 2024 00:14:48.999 read: IOPS=6389, BW=25.0MiB/s (26.2MB/s)(50.1MiB/2009msec) 00:14:48.999 slat (nsec): min=1985, max=298553, avg=2605.84, stdev=3534.93 00:14:48.999 clat (usec): min=2929, max=17587, avg=10473.58, stdev=878.50 00:14:48.999 lat (usec): min=2938, max=17590, avg=10476.19, stdev=878.25 00:14:48.999 clat percentiles (usec): 00:14:48.999 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:14:48.999 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:14:48.999 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:14:48.999 | 99.00th=[12518], 99.50th=[12780], 99.90th=[15926], 99.95th=[16712], 00:14:48.999 | 99.99th=[16909] 00:14:48.999 bw ( KiB/s): min=24536, max=26056, per=99.95%, avg=25546.00, stdev=690.06, samples=4 00:14:48.999 iops : min= 6134, max= 6514, avg=6386.50, stdev=172.51, samples=4 00:14:48.999 write: IOPS=6390, BW=25.0MiB/s (26.2MB/s)(50.1MiB/2009msec); 0 zone resets 00:14:48.999 slat (usec): min=2, max=215, avg= 2.69, stdev= 2.44 00:14:48.999 clat (usec): min=2235, max=17707, avg=9501.79, stdev=840.09 00:14:48.999 lat (usec): min=2248, max=17709, 
avg=9504.47, stdev=839.96 00:14:48.999 clat percentiles (usec): 00:14:48.999 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:14:48.999 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:14:48.999 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10814], 00:14:48.999 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15926], 99.95th=[16712], 00:14:48.999 | 99.99th=[17695] 00:14:48.999 bw ( KiB/s): min=25288, max=25936, per=99.96%, avg=25550.00, stdev=312.54, samples=4 00:14:48.999 iops : min= 6322, max= 6484, avg=6387.50, stdev=78.13, samples=4 00:14:48.999 lat (msec) : 4=0.06%, 10=51.34%, 20=48.60% 00:14:48.999 cpu : usr=72.46%, sys=21.36%, ctx=11, majf=0, minf=14 00:14:48.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:48.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:48.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:48.999 issued rwts: total=12837,12838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:48.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:48.999 00:14:48.999 Run status group 0 (all jobs): 00:14:48.999 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.1MiB (52.6MB), run=2009-2009msec 00:14:48.999 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.1MiB (52.6MB), run=2009-2009msec 00:14:48.999 00:27:04 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:49.257 00:27:04 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:49.516 00:27:05 -- host/fio.sh@64 -- # ls_nested_guid=02b9eec0-ac17-477b-8449-92d658ff0c96 00:14:49.516 00:27:05 -- host/fio.sh@65 -- # get_lvs_free_mb 02b9eec0-ac17-477b-8449-92d658ff0c96 00:14:49.516 00:27:05 -- common/autotest_common.sh@1343 -- # local lvs_uuid=02b9eec0-ac17-477b-8449-92d658ff0c96 00:14:49.516 00:27:05 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:49.516 00:27:05 -- common/autotest_common.sh@1345 -- # local fc 00:14:49.516 00:27:05 -- common/autotest_common.sh@1346 -- # local cs 00:14:49.516 00:27:05 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:49.775 00:27:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:49.775 { 00:14:49.775 "uuid": "695b0165-282c-4cc6-ac79-0b0bfc93d6e5", 00:14:49.775 "name": "lvs_0", 00:14:49.775 "base_bdev": "Nvme0n1", 00:14:49.775 "total_data_clusters": 4, 00:14:49.775 "free_clusters": 0, 00:14:49.775 "block_size": 4096, 00:14:49.775 "cluster_size": 1073741824 00:14:49.775 }, 00:14:49.775 { 00:14:49.775 "uuid": "02b9eec0-ac17-477b-8449-92d658ff0c96", 00:14:49.775 "name": "lvs_n_0", 00:14:49.775 "base_bdev": "c1d54683-afb9-4b34-a606-bc485325eb6a", 00:14:49.775 "total_data_clusters": 1022, 00:14:49.775 "free_clusters": 1022, 00:14:49.775 "block_size": 4096, 00:14:49.775 "cluster_size": 4194304 00:14:49.775 } 00:14:49.775 ]' 00:14:49.775 00:27:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="02b9eec0-ac17-477b-8449-92d658ff0c96") .free_clusters' 00:14:49.775 00:27:05 -- common/autotest_common.sh@1348 -- # fc=1022 00:14:49.775 00:27:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="02b9eec0-ac17-477b-8449-92d658ff0c96") .cluster_size' 00:14:49.775 4088 00:14:49.775 00:27:05 -- common/autotest_common.sh@1349 -- # cs=4194304 
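The free-size arithmetic traced just above (get_lvs_free_mb: free_clusters times cluster_size, reported in MiB) can be restated as a small stand-alone sketch. The jq filters mirror the ones in the trace; "$uuid" is a placeholder for the lvstore UUID returned by bdev_lvol_create_lvstore, and the rpc.py path is abbreviated.

# Sketch only: compute an lvstore's free space in MiB, as autotest_common.sh does above.
lvs_info=$(scripts/rpc.py bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs_info")
cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs_info")
echo $(( fc * cs / 1024 / 1024 ))   # 4 * 1073741824 -> 4096 MiB; 1022 * 4194304 -> 4088 MiB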
00:14:49.775 00:27:05 -- common/autotest_common.sh@1352 -- # free_mb=4088 00:14:49.775 00:27:05 -- common/autotest_common.sh@1353 -- # echo 4088 00:14:49.775 00:27:05 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:50.034 948ccdfe-3cd6-43c8-9bb1-c4b92662109e 00:14:50.034 00:27:05 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:50.292 00:27:05 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:50.551 00:27:06 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:50.809 00:27:06 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.809 00:27:06 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.809 00:27:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:14:50.809 00:27:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:50.809 00:27:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:14:50.809 00:27:06 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.809 00:27:06 -- common/autotest_common.sh@1320 -- # shift 00:14:50.809 00:27:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:14:50.809 00:27:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:50.809 00:27:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:50.809 00:27:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:14:50.809 00:27:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:14:50.809 00:27:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:14:50.809 00:27:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:50.809 00:27:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:50.809 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:50.809 fio-3.35 00:14:50.809 Starting 1 thread 00:14:53.341 00:14:53.341 test: (groupid=0, jobs=1): err= 0: pid=69441: Sun Sep 29 00:27:08 2024 00:14:53.341 read: IOPS=5772, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2009msec) 00:14:53.341 slat (nsec): min=1986, 
max=363292, avg=2664.37, stdev=4434.25 00:14:53.341 clat (usec): min=3312, max=20247, avg=11604.09, stdev=978.85 00:14:53.341 lat (usec): min=3322, max=20249, avg=11606.76, stdev=978.47 00:14:53.341 clat percentiles (usec): 00:14:53.341 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:14:53.341 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:14:53.341 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:14:53.341 | 99.00th=[13698], 99.50th=[14222], 99.90th=[19268], 99.95th=[19530], 00:14:53.341 | 99.99th=[20055] 00:14:53.341 bw ( KiB/s): min=22248, max=23528, per=99.83%, avg=23050.00, stdev=557.80, samples=4 00:14:53.341 iops : min= 5562, max= 5882, avg=5762.50, stdev=139.45, samples=4 00:14:53.341 write: IOPS=5756, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:14:53.341 slat (usec): min=2, max=230, avg= 2.77, stdev= 2.86 00:14:53.341 clat (usec): min=2557, max=20270, avg=10498.63, stdev=920.99 00:14:53.341 lat (usec): min=2571, max=20272, avg=10501.40, stdev=920.79 00:14:53.341 clat percentiles (usec): 00:14:53.341 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:14:53.341 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:14:53.341 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:14:53.341 | 99.00th=[12518], 99.50th=[12780], 99.90th=[17171], 99.95th=[19268], 00:14:53.341 | 99.99th=[20317] 00:14:53.341 bw ( KiB/s): min=22952, max=23176, per=99.97%, avg=23020.00, stdev=104.61, samples=4 00:14:53.341 iops : min= 5738, max= 5794, avg=5755.00, stdev=26.15, samples=4 00:14:53.341 lat (msec) : 4=0.05%, 10=15.08%, 20=84.85%, 50=0.03% 00:14:53.341 cpu : usr=74.40%, sys=20.02%, ctx=7, majf=0, minf=14 00:14:53.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:53.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.341 issued rwts: total=11597,11565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.341 00:14:53.341 Run status group 0 (all jobs): 00:14:53.341 READ: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:14:53.341 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:14:53.341 00:27:08 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:53.341 00:27:09 -- host/fio.sh@74 -- # sync 00:14:53.599 00:27:09 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:53.599 00:27:09 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:54.165 00:27:09 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:54.165 00:27:09 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:54.423 00:27:10 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:54.989 00:27:10 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:54.989 00:27:10 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:54.989 00:27:10 -- host/fio.sh@86 -- # nvmftestfini 00:14:54.989 00:27:10 -- nvmf/common.sh@476 -- # 
nvmfcleanup 00:14:54.989 00:27:10 -- nvmf/common.sh@116 -- # sync 00:14:54.989 00:27:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:54.989 00:27:10 -- nvmf/common.sh@119 -- # set +e 00:14:54.989 00:27:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:54.989 00:27:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:54.989 rmmod nvme_tcp 00:14:54.989 rmmod nvme_fabrics 00:14:54.989 rmmod nvme_keyring 00:14:54.989 00:27:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:54.989 00:27:10 -- nvmf/common.sh@123 -- # set -e 00:14:54.989 00:27:10 -- nvmf/common.sh@124 -- # return 0 00:14:54.989 00:27:10 -- nvmf/common.sh@477 -- # '[' -n 69120 ']' 00:14:54.989 00:27:10 -- nvmf/common.sh@478 -- # killprocess 69120 00:14:54.989 00:27:10 -- common/autotest_common.sh@926 -- # '[' -z 69120 ']' 00:14:54.989 00:27:10 -- common/autotest_common.sh@930 -- # kill -0 69120 00:14:55.248 00:27:10 -- common/autotest_common.sh@931 -- # uname 00:14:55.248 00:27:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.248 00:27:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69120 00:14:55.248 killing process with pid 69120 00:14:55.248 00:27:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:55.248 00:27:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:55.248 00:27:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69120' 00:14:55.248 00:27:10 -- common/autotest_common.sh@945 -- # kill 69120 00:14:55.248 00:27:10 -- common/autotest_common.sh@950 -- # wait 69120 00:14:55.248 00:27:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:55.248 00:27:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:55.248 00:27:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:55.248 00:27:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.248 00:27:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:55.248 00:27:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.248 00:27:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.248 00:27:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.248 00:27:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:55.248 00:14:55.248 real 0m19.350s 00:14:55.248 user 1m25.726s 00:14:55.248 sys 0m4.201s 00:14:55.248 00:27:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.248 00:27:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.248 ************************************ 00:14:55.248 END TEST nvmf_fio_host 00:14:55.248 ************************************ 00:14:55.508 00:27:11 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:55.508 00:27:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.508 00:27:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.508 00:27:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.508 ************************************ 00:14:55.508 START TEST nvmf_failover 00:14:55.508 ************************************ 00:14:55.508 00:27:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:55.508 * Looking for test storage... 
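The nvmftestfini teardown traced above reduces to the following sketch. The modprobe, kill/wait, and address-flush steps are taken directly from the trace; the namespace removal is an assumption about what _remove_spdk_ns does, since its body is hidden behind xtrace_disable.

# Condensed sketch of the teardown steps visible above.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"              # nvmf_tgt pid, 69120 in this run
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if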
00:14:55.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:55.508 00:27:11 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.508 00:27:11 -- nvmf/common.sh@7 -- # uname -s 00:14:55.508 00:27:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.508 00:27:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.508 00:27:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.508 00:27:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.508 00:27:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.508 00:27:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.508 00:27:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.508 00:27:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.508 00:27:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.508 00:27:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:14:55.508 00:27:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:14:55.508 00:27:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.508 00:27:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.508 00:27:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.508 00:27:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.508 00:27:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.508 00:27:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.508 00:27:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.508 00:27:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.508 00:27:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.508 00:27:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.508 00:27:11 -- paths/export.sh@5 
-- # export PATH 00:14:55.508 00:27:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.508 00:27:11 -- nvmf/common.sh@46 -- # : 0 00:14:55.508 00:27:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.508 00:27:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.508 00:27:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.508 00:27:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.508 00:27:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.508 00:27:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.508 00:27:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.508 00:27:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.508 00:27:11 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.508 00:27:11 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.508 00:27:11 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.508 00:27:11 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.508 00:27:11 -- host/failover.sh@18 -- # nvmftestinit 00:14:55.508 00:27:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.508 00:27:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.508 00:27:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.508 00:27:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.508 00:27:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.508 00:27:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.508 00:27:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.508 00:27:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.508 00:27:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:55.508 00:27:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:55.508 00:27:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:55.508 00:27:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.508 00:27:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.508 00:27:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:55.508 00:27:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.508 00:27:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.508 00:27:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.508 00:27:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.508 00:27:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.508 00:27:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.508 00:27:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:14:55.508 00:27:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.508 00:27:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:55.508 00:27:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:55.508 Cannot find device "nvmf_tgt_br" 00:14:55.508 00:27:11 -- nvmf/common.sh@154 -- # true 00:14:55.508 00:27:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.508 Cannot find device "nvmf_tgt_br2" 00:14:55.508 00:27:11 -- nvmf/common.sh@155 -- # true 00:14:55.508 00:27:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:55.508 00:27:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:55.508 Cannot find device "nvmf_tgt_br" 00:14:55.508 00:27:11 -- nvmf/common.sh@157 -- # true 00:14:55.508 00:27:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:55.508 Cannot find device "nvmf_tgt_br2" 00:14:55.508 00:27:11 -- nvmf/common.sh@158 -- # true 00:14:55.508 00:27:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:55.767 00:27:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:55.767 00:27:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.767 00:27:11 -- nvmf/common.sh@161 -- # true 00:14:55.767 00:27:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.767 00:27:11 -- nvmf/common.sh@162 -- # true 00:14:55.767 00:27:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.767 00:27:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.767 00:27:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.767 00:27:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.767 00:27:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:55.767 00:27:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.767 00:27:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.767 00:27:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.767 00:27:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.767 00:27:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:55.767 00:27:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:55.767 00:27:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:55.767 00:27:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:55.767 00:27:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.767 00:27:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.767 00:27:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.767 00:27:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:55.767 00:27:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:55.767 00:27:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.767 00:27:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.767 00:27:11 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:14:55.767 00:27:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.767 00:27:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.767 00:27:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:55.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:14:55.767 00:14:55.767 --- 10.0.0.2 ping statistics --- 00:14:55.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.768 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:55.768 00:27:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:55.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:55.768 00:14:55.768 --- 10.0.0.3 ping statistics --- 00:14:55.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.768 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:55.768 00:27:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:55.768 00:14:55.768 --- 10.0.0.1 ping statistics --- 00:14:55.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.768 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:55.768 00:27:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.768 00:27:11 -- nvmf/common.sh@421 -- # return 0 00:14:55.768 00:27:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:55.768 00:27:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.768 00:27:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:55.768 00:27:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:55.768 00:27:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.768 00:27:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:55.768 00:27:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:55.768 00:27:11 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:55.768 00:27:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:55.768 00:27:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:55.768 00:27:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.768 00:27:11 -- nvmf/common.sh@469 -- # nvmfpid=69672 00:14:55.768 00:27:11 -- nvmf/common.sh@470 -- # waitforlisten 69672 00:14:55.768 00:27:11 -- common/autotest_common.sh@819 -- # '[' -z 69672 ']' 00:14:55.768 00:27:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:55.768 00:27:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.768 00:27:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.768 00:27:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.768 00:27:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.768 00:27:11 -- common/autotest_common.sh@10 -- # set +x 00:14:56.026 [2024-09-29 00:27:11.644437] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
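The virtual topology being rebuilt above — one initiator veth on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, all enslaved to a bridge — boils down to roughly the following. This is a condensed restatement of the nvmf_veth_init commands in the trace, not the script itself.

# Condensed sketch of the veth/bridge test network set up by nvmf/common.sh above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link add nvmf_br type bridge
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After the pings confirm connectivity, the trace launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and waits for its RPC socket before continuing.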
00:14:56.026 [2024-09-29 00:27:11.644538] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.027 [2024-09-29 00:27:11.785141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:56.027 [2024-09-29 00:27:11.843203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.027 [2024-09-29 00:27:11.843401] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.027 [2024-09-29 00:27:11.843417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.027 [2024-09-29 00:27:11.843427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.027 [2024-09-29 00:27:11.843573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.027 [2024-09-29 00:27:11.843998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.027 [2024-09-29 00:27:11.844050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.961 00:27:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.961 00:27:12 -- common/autotest_common.sh@852 -- # return 0 00:14:56.961 00:27:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:56.961 00:27:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:56.961 00:27:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.961 00:27:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.961 00:27:12 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:57.218 [2024-09-29 00:27:12.885843] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.218 00:27:12 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:57.476 Malloc0 00:14:57.476 00:27:13 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.734 00:27:13 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.992 00:27:13 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.251 [2024-09-29 00:27:13.934128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.251 00:27:13 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:58.509 [2024-09-29 00:27:14.166268] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:58.509 00:27:14 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:58.768 [2024-09-29 00:27:14.386507] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:58.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
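Before the failover exercise begins, the target-side setup traced above amounts to one subsystem exporting a Malloc namespace on three TCP listeners; a condensed sketch follows, with scripts/rpc.py abbreviated as rpc.py.

# Sketch of the failover.sh target setup traced above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf is then started in RPC-driven mode and attached through the first path:
# bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
# rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1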
00:14:58.768 00:27:14 -- host/failover.sh@31 -- # bdevperf_pid=69735 00:14:58.768 00:27:14 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:58.768 00:27:14 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.768 00:27:14 -- host/failover.sh@34 -- # waitforlisten 69735 /var/tmp/bdevperf.sock 00:14:58.768 00:27:14 -- common/autotest_common.sh@819 -- # '[' -z 69735 ']' 00:14:58.768 00:27:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.768 00:27:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:58.768 00:27:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.768 00:27:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:58.768 00:27:14 -- common/autotest_common.sh@10 -- # set +x 00:14:59.714 00:27:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:59.714 00:27:15 -- common/autotest_common.sh@852 -- # return 0 00:14:59.714 00:27:15 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:59.974 NVMe0n1 00:14:59.974 00:27:15 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:00.232 00:15:00.232 00:27:16 -- host/failover.sh@39 -- # run_test_pid=69759 00:15:00.232 00:27:16 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.232 00:27:16 -- host/failover.sh@41 -- # sleep 1 00:15:01.633 00:27:17 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.633 [2024-09-29 00:27:17.291726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291889] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.633 [2024-09-29 00:27:17.291935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.291998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the 
state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 [2024-09-29 00:27:17.292105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c59d00 is same with the state(5) to be set 00:15:01.634 00:27:17 -- host/failover.sh@45 -- # sleep 3 00:15:04.916 00:27:20 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:04.916 00:15:04.916 00:27:20 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:05.175 [2024-09-29 00:27:20.869136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 [2024-09-29 00:27:20.869663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5a3c0 is same with the state(5) to be set 00:15:05.175 00:27:20 -- host/failover.sh@50 -- # sleep 3 00:15:08.462 00:27:23 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.462 [2024-09-29 00:27:24.152473] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.462 00:27:24 -- host/failover.sh@55 -- # sleep 1 00:15:09.400 00:27:25 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:09.659 [2024-09-29 00:27:25.426515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426762] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 [2024-09-29 00:27:25.426854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c589f0 is same with the state(5) to be set 00:15:09.659 00:27:25 -- host/failover.sh@59 -- # wait 69759 00:15:16.231 0 00:15:16.231 00:27:31 -- host/failover.sh@61 -- # killprocess 69735 00:15:16.231 00:27:31 -- common/autotest_common.sh@926 -- # '[' -z 69735 ']' 00:15:16.231 00:27:31 -- common/autotest_common.sh@930 -- # kill -0 69735 00:15:16.231 00:27:31 -- common/autotest_common.sh@931 -- # uname 00:15:16.231 00:27:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.231 00:27:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69735 00:15:16.231 killing process with pid 69735 00:15:16.231 00:27:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.231 00:27:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.231 00:27:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69735' 00:15:16.231 00:27:31 -- common/autotest_common.sh@945 -- # kill 69735 00:15:16.231 00:27:31 -- common/autotest_common.sh@950 -- # wait 69735 00:15:16.231 00:27:31 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:16.231 [2024-09-29 00:27:14.448866] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
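The failover choreography traced above, carried out while bdevperf keeps verifying I/O against NVMe0, can be summarized as the following sequence of listener changes (ports, NQN, and RPC socket as they appear in the trace; rpc.py path abbreviated).

# Condensed restatement of the failover steps traced above.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
sleep 3
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first path
sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The 15-second verify run driven by bdevperf.py perform_tests (pid 69735 here) has to finish cleanly across these path changes; the try.txt dump that follows is that bdevperf instance's own log.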
00:15:16.231 [2024-09-29 00:27:14.449009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69735 ] 00:15:16.231 [2024-09-29 00:27:14.583252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.231 [2024-09-29 00:27:14.638160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.231 Running I/O for 15 seconds... 00:15:16.231 [2024-09-29 00:27:17.292159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 
[2024-09-29 00:27:17.292553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.292970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.292984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.231 [2024-09-29 00:27:17.293183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.231 [2024-09-29 00:27:17.293199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.293764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.293968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.293989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.232 [2024-09-29 00:27:17.294405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.232 [2024-09-29 00:27:17.294454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.232 [2024-09-29 00:27:17.294468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.294530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:123112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 
[2024-09-29 00:27:17.294636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.294680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.294709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.294739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.294981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.294996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.233 [2024-09-29 00:27:17.295513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295597] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.233 [2024-09-29 00:27:17.295656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.233 [2024-09-29 00:27:17.295671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.295884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.295944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.295974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.295989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123408 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.234 [2024-09-29 00:27:17.296423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:17.296524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe92970 is same with the state(5) to be set 00:15:16.234 [2024-09-29 00:27:17.296557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.234 [2024-09-29 00:27:17.296568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.234 [2024-09-29 00:27:17.296594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122832 len:8 PRP1 0x0 PRP2 0x0 00:15:16.234 [2024-09-29 00:27:17.296611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:17.296660] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe92970 was disconnected and freed. 
reset controller.
00:15:16.234 [2024-09-29 00:27:17.296709] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:15:16.234 [2024-09-29 00:27:17.296767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.234 [2024-09-29 00:27:17.296790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.234 [2024-09-29 00:27:17.296805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.234 [2024-09-29 00:27:17.296819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.234 [2024-09-29 00:27:17.296832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.234 [2024-09-29 00:27:17.296845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.234 [2024-09-29 00:27:17.296859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:16.234 [2024-09-29 00:27:17.296872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:16.234 [2024-09-29 00:27:17.296885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:16.234 [2024-09-29 00:27:17.299497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:16.234 [2024-09-29 00:27:17.299535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f690 (9): Bad file descriptor
00:15:16.234 [2024-09-29 00:27:17.331731] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
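Taken together, the burst above is one complete failover cycle: the queued READ/WRITE commands on qid:1 are completed as ABORTED - SQ DELETION, qpair 0xe92970 is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, the outstanding admin ASYNC EVENT REQUESTs are aborted, the controller briefly enters the failed state, and the reset completes successfully before a second burst of aborts begins at 00:27:20. A minimal sketch for digesting excerpts like this one, assuming the console output has been saved to a local file (the file name bdevperf_failover.log and the script itself are illustrative helpers, not part of the SPDK test suite), relying only on the message formats visible above:

#!/usr/bin/env python3
"""Summarize an SPDK bdevperf failover log excerpt (hypothetical helper)."""
import re
import sys
from collections import Counter

# Patterns copied from the log messages above.
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: Resetting controller successful")

def summarize(text):
    opcodes = Counter(m.group(1) for m in CMD_RE.finditer(text))   # READ/WRITE command counts
    aborted = len(ABORT_RE.findall(text))                          # completions aborted by SQ deletion
    failovers = FAILOVER_RE.findall(text)                          # (from, to) target address pairs
    resets_ok = len(RESET_OK_RE.findall(text))
    print("printed I/O commands:", sum(opcodes.values()), dict(opcodes))
    print("aborted (SQ DELETION) completions:", aborted)
    for src, dst in failovers:
        print("failover:", src, "->", dst)
    print("successful controller resets:", resets_ok)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "bdevperf_failover.log"
    with open(path, encoding="utf-8", errors="replace") as fh:
        summarize(fh.read())

Run against this excerpt, such a summary would report one failover (10.0.0.2:4420 -> 10.0.0.2:4421), one successful controller reset, and the count of I/O aborted during the switch; it only condenses the captured output and does not alter the test in any way.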
00:15:16.234 [2024-09-29 00:27:20.869790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:20.869846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:20.869898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:20.869918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:20.869936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:20.869965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:20.869985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:20.869999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:20.870015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.234 [2024-09-29 00:27:20.870029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.234 [2024-09-29 00:27:20.870045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 
00:27:20.870217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.235 [2024-09-29 00:27:20.870693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.235 [2024-09-29 00:27:20.870759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.235 [2024-09-29 00:27:20.870866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870882] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.235 [2024-09-29 00:27:20.870959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.870975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.870990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.871005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.235 [2024-09-29 00:27:20.871020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.871035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.871050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.871067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.235 [2024-09-29 00:27:20.871081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.235 [2024-09-29 00:27:20.871097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123864 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.871853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 [2024-09-29 00:27:20.871942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.236 
[2024-09-29 00:27:20.871972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.871987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.236 [2024-09-29 00:27:20.872491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.236 [2024-09-29 00:27:20.872507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.872531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.872727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.872825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.872942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.872973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.872996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.237 [2024-09-29 00:27:20.873803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.237 [2024-09-29 00:27:20.873840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.237 [2024-09-29 00:27:20.873857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.873871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.873887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.873901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.873916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.238 [2024-09-29 00:27:20.873931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.873946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.238 [2024-09-29 00:27:20.873960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.873976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.238 [2024-09-29 00:27:20.873990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.238 [2024-09-29 00:27:20.874020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 
[2024-09-29 00:27:20.874036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:20.874215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe77450 is same with the state(5) to be set 00:15:16.238 [2024-09-29 00:27:20.874253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.238 [2024-09-29 00:27:20.874265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.238 [2024-09-29 00:27:20.874277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124312 len:8 PRP1 0x0 PRP2 0x0 00:15:16.238 [2024-09-29 00:27:20.874290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:20.874337] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe77450 was disconnected and freed. reset controller. 
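The dump above is the bulk of this failure signature: each aborted I/O is reported as a pair of notices, an nvme_io_qpair_print_command line giving the opcode, sqid, cid and lba, immediately followed by an spdk_nvme_print_completion line carrying the (SCT/SC) status, ending with the qpair being disconnected and freed. When triaging a run like this it is usually easier to tally those pairs than to read them one by one. The helper below is a minimal, hypothetical sketch (it is not part of the SPDK tree); its regular expressions assume only the line formats visible in this log.

#!/usr/bin/env python3
"""Illustrative sketch: summarize SPDK print_command/print_completion dumps from an autotest log."""
import re
import sys
from collections import Counter

# nvme_io_qpair_print_command notices, e.g.
#   "*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
# spdk_nvme_print_completion notices, e.g.
#   "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
CPL_RE = re.compile(r"\*NOTICE\*: ([A-Z -]+) \((\w{2})/(\w{2})\) qid:(\d+)")

def summarize(stream):
    ops = Counter()       # I/O command notices per opcode (READ/WRITE)
    lbas = []             # LBAs named in the command notices
    statuses = Counter()  # (status text, SCT, SC) tuples from the completion notices
    for line in stream:
        # finditer, because this log wraps several notices onto one physical line
        for m in CMD_RE.finditer(line):
            ops[m.group(1)] += 1
            lbas.append(int(m.group(5)))
        for c in CPL_RE.finditer(line):
            statuses[(c.group(1).strip(), c.group(2), c.group(3))] += 1
    return ops, lbas, statuses

if __name__ == "__main__":
    ops, lbas, statuses = summarize(sys.stdin)
    for op, count in sorted(ops.items()):
        print(f"{op:5s} commands printed: {count}")
    if lbas:
        print(f"LBA range touched: {min(lbas)}..{max(lbas)}")
    for (text, sct, sc), count in statuses.items():
        print(f"completion '{text}' (sct=0x{sct}, sc=0x{sc}): {count}")

Fed this section of the build log on stdin, a script like this should report a mix of READ and WRITE aborts across the lba 123xxx-124xxx range, all completed with ABORTED - SQ DELETION (00/08).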
00:15:16.238 [2024-09-29 00:27:20.874372] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:15:16.238 [2024-09-29 00:27:20.874442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:16.238 [2024-09-29 00:27:20.874466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.238 [2024-09-29 00:27:20.874481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:16.238 [2024-09-29 00:27:20.874495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.238 [2024-09-29 00:27:20.874509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:16.238 [2024-09-29 00:27:20.874523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.238 [2024-09-29 00:27:20.874537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:16.238 [2024-09-29 00:27:20.874551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:16.238 [2024-09-29 00:27:20.874564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:16.238 [2024-09-29 00:27:20.874598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f690 (9): Bad file descriptor 
00:15:16.238 [2024-09-29 00:27:20.877054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:16.238 [2024-09-29 00:27:20.907145] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
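For reference, the value printed in parentheses by spdk_nvme_print_completion is the NVMe (SCT/SC) pair, so the "(00/08)" repeated throughout this dump decodes to Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion: the submission queues on the 10.0.0.2:4421 path were torn down, and the notices above show bdev_nvme failing the controller over to 10.0.0.2:4422 and completing the reset. A tiny, purely illustrative decoder for that pair (only the codes that actually appear in this log are spelled out) could look like:

# Hedged sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
# Codes not listed here are reported in raw hex rather than guessed at.
SCT_NAMES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
}
GENERIC_SC_NAMES = {
    0x00: "Successful Completion",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(pair: str) -> str:
    """Turn a string such as '00/08' into a readable status description."""
    sct_str, sc_str = pair.split("/")
    sct, sc = int(sct_str, 16), int(sc_str, 16)
    sct_name = SCT_NAMES.get(sct, f"SCT 0x{sct:x}")
    sc_name = GENERIC_SC_NAMES.get(sc, f"SC 0x{sc:02x}") if sct == 0x0 else f"SC 0x{sc:02x}"
    return f"{sct_name} / {sc_name}"

if __name__ == "__main__":
    # -> Generic Command Status / Command Aborted due to SQ Deletion
    print(decode_status("00/08"))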
00:15:16.238 [2024-09-29 00:27:25.426938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.426992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 
00:27:25.427319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.238 [2024-09-29 00:27:25.427619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.238 [2024-09-29 00:27:25.427634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.427905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.427936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.427966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.427990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428021] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.239 [2024-09-29 00:27:25.428895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.239 [2024-09-29 00:27:25.428952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.239 [2024-09-29 00:27:25.428967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.428981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.428996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 
[2024-09-29 00:27:25.429080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429368] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.240 [2024-09-29 00:27:25.429898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429954] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.240 [2024-09-29 00:27:25.429969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.240 [2024-09-29 00:27:25.429982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 
[2024-09-29 00:27:25.430852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:16.241 [2024-09-29 00:27:25.430949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.430977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.430991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:16.241 [2024-09-29 00:27:25.431004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.431019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27cc0 is same with the state(5) to be set 00:15:16.241 [2024-09-29 00:27:25.431036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:16.241 [2024-09-29 00:27:25.431047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:16.241 [2024-09-29 00:27:25.431058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103576 len:8 PRP1 0x0 PRP2 0x0 00:15:16.241 [2024-09-29 00:27:25.431071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.241 [2024-09-29 00:27:25.431116] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe27cc0 was disconnected and freed. reset controller. 
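The abort storm above ends with the I/O qpair being disconnected and freed and the controller reset; the check traced just below verifies the failover run by counting those reset notices in the captured bdevperf output. A minimal sketch of that check, assuming the run's output was captured to the try.txt file referenced later in this log (the exact capture path used by failover.sh may differ):

    # Count successful controller resets reported by bdev_nvme; the expected
    # value of 3 matches the three failovers exercised in this run.
    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful failover resets, got $count"
        exit 1
    fi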
00:15:16.241 [2024-09-29 00:27:25.431142] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:16.241 [2024-09-29 00:27:25.431198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.241 [2024-09-29 00:27:25.431219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.242 [2024-09-29 00:27:25.431234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.242 [2024-09-29 00:27:25.431247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.242 [2024-09-29 00:27:25.431260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.242 [2024-09-29 00:27:25.431273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.242 [2024-09-29 00:27:25.431287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.242 [2024-09-29 00:27:25.431300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.242 [2024-09-29 00:27:25.431313] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:16.242 [2024-09-29 00:27:25.431389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2f690 (9): Bad file descriptor 00:15:16.242 [2024-09-29 00:27:25.433706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:16.242 [2024-09-29 00:27:25.468540] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:16.242 00:15:16.242 Latency(us) 00:15:16.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:16.242 Verification LBA range: start 0x0 length 0x4000 00:15:16.242 NVMe0n1 : 15.01 13469.07 52.61 319.65 0.00 9264.85 422.63 14656.23 00:15:16.242 =================================================================================================================== 00:15:16.242 Total : 13469.07 52.61 319.65 0.00 9264.85 422.63 14656.23 00:15:16.242 Received shutdown signal, test time was about 15.000000 seconds 00:15:16.242 00:15:16.242 Latency(us) 00:15:16.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.242 =================================================================================================================== 00:15:16.242 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.242 00:27:31 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:16.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
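The lines that follow bring up a second bdevperf instance in standalone RPC mode and exercise one more failover through it. A condensed sketch of that sequence, using the binaries, socket, ports and NQN exactly as they appear in the trace below (a consolidation for readability, not a verbatim excerpt of failover.sh):

    # Start bdevperf idle (-z), driven over its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # Give the subsystem two extra portals to fail over to.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach NVMe0 on the primary portal; 4421 and 4422 are attached the same
    # way so bdev_nvme knows about the alternate paths.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Drop the active path, then run the verify workload; bdev_nvme should
    # fail over to a remaining listener ("Start failover" in the output).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests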
00:15:16.242 00:27:31 -- host/failover.sh@65 -- # count=3 00:15:16.242 00:27:31 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:16.242 00:27:31 -- host/failover.sh@73 -- # bdevperf_pid=69938 00:15:16.242 00:27:31 -- host/failover.sh@75 -- # waitforlisten 69938 /var/tmp/bdevperf.sock 00:15:16.242 00:27:31 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:16.242 00:27:31 -- common/autotest_common.sh@819 -- # '[' -z 69938 ']' 00:15:16.242 00:27:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.242 00:27:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.242 00:27:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.242 00:27:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.242 00:27:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.809 00:27:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:16.809 00:27:32 -- common/autotest_common.sh@852 -- # return 0 00:15:16.809 00:27:32 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:17.068 [2024-09-29 00:27:32.690422] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:17.068 00:27:32 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:17.327 [2024-09-29 00:27:32.958672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:17.327 00:27:32 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.586 NVMe0n1 00:15:17.586 00:27:33 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:17.845 00:15:17.845 00:27:33 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.104 00:15:18.104 00:27:33 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:18.104 00:27:33 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:18.362 00:27:34 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:18.929 00:27:34 -- host/failover.sh@87 -- # sleep 3 00:15:22.265 00:27:37 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:22.265 00:27:37 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:22.265 00:27:37 -- host/failover.sh@90 -- # run_test_pid=70015 00:15:22.265 00:27:37 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.265 00:27:37 -- host/failover.sh@92 -- # wait 70015 00:15:23.200 0 00:15:23.200 00:27:38 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:23.200 [2024-09-29 00:27:31.464929] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:23.200 [2024-09-29 00:27:31.465045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69938 ] 00:15:23.200 [2024-09-29 00:27:31.602442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.200 [2024-09-29 00:27:31.664220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.200 [2024-09-29 00:27:34.464230] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:23.200 [2024-09-29 00:27:34.464394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.200 [2024-09-29 00:27:34.464423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.200 [2024-09-29 00:27:34.464443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.200 [2024-09-29 00:27:34.464458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.200 [2024-09-29 00:27:34.464474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.200 [2024-09-29 00:27:34.464489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.200 [2024-09-29 00:27:34.464504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.200 [2024-09-29 00:27:34.464518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.200 [2024-09-29 00:27:34.464533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:23.200 [2024-09-29 00:27:34.464587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.200 [2024-09-29 00:27:34.464621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363690 (9): Bad file descriptor 00:15:23.200 [2024-09-29 00:27:34.475994] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:23.200 Running I/O for 1 seconds... 
00:15:23.200 00:15:23.200 Latency(us) 00:15:23.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.200 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:23.200 Verification LBA range: start 0x0 length 0x4000 00:15:23.201 NVMe0n1 : 1.01 13458.03 52.57 0.00 0.00 9464.25 916.01 15013.70 00:15:23.201 =================================================================================================================== 00:15:23.201 Total : 13458.03 52.57 0.00 0.00 9464.25 916.01 15013.70 00:15:23.201 00:27:38 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:23.201 00:27:38 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:23.459 00:27:39 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:23.717 00:27:39 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:23.717 00:27:39 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:23.974 00:27:39 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:24.232 00:27:39 -- host/failover.sh@101 -- # sleep 3 00:15:27.514 00:27:42 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:27.514 00:27:42 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:27.514 00:27:43 -- host/failover.sh@108 -- # killprocess 69938 00:15:27.514 00:27:43 -- common/autotest_common.sh@926 -- # '[' -z 69938 ']' 00:15:27.514 00:27:43 -- common/autotest_common.sh@930 -- # kill -0 69938 00:15:27.514 00:27:43 -- common/autotest_common.sh@931 -- # uname 00:15:27.514 00:27:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.514 00:27:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69938 00:15:27.514 00:27:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.514 00:27:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.514 killing process with pid 69938 00:15:27.514 00:27:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69938' 00:15:27.514 00:27:43 -- common/autotest_common.sh@945 -- # kill 69938 00:15:27.514 00:27:43 -- common/autotest_common.sh@950 -- # wait 69938 00:15:27.514 00:27:43 -- host/failover.sh@110 -- # sync 00:15:27.514 00:27:43 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.773 00:27:43 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:27.773 00:27:43 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:27.773 00:27:43 -- host/failover.sh@116 -- # nvmftestfini 00:15:27.773 00:27:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:27.773 00:27:43 -- nvmf/common.sh@116 -- # sync 00:15:27.773 00:27:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:27.773 00:27:43 -- nvmf/common.sh@119 -- # set +e 00:15:27.773 00:27:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:27.773 00:27:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:27.773 rmmod nvme_tcp 00:15:27.773 rmmod nvme_fabrics 00:15:28.032 rmmod nvme_keyring 00:15:28.032 00:27:43 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-fabrics 00:15:28.032 00:27:43 -- nvmf/common.sh@123 -- # set -e 00:15:28.032 00:27:43 -- nvmf/common.sh@124 -- # return 0 00:15:28.032 00:27:43 -- nvmf/common.sh@477 -- # '[' -n 69672 ']' 00:15:28.032 00:27:43 -- nvmf/common.sh@478 -- # killprocess 69672 00:15:28.032 00:27:43 -- common/autotest_common.sh@926 -- # '[' -z 69672 ']' 00:15:28.032 00:27:43 -- common/autotest_common.sh@930 -- # kill -0 69672 00:15:28.032 00:27:43 -- common/autotest_common.sh@931 -- # uname 00:15:28.032 00:27:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:28.032 00:27:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69672 00:15:28.032 00:27:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:28.032 killing process with pid 69672 00:15:28.032 00:27:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:28.032 00:27:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69672' 00:15:28.032 00:27:43 -- common/autotest_common.sh@945 -- # kill 69672 00:15:28.032 00:27:43 -- common/autotest_common.sh@950 -- # wait 69672 00:15:28.032 00:27:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:28.032 00:27:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:28.032 00:27:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:28.032 00:27:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.032 00:27:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:28.032 00:27:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.032 00:27:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.032 00:27:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.292 00:27:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:28.292 ************************************ 00:15:28.292 END TEST nvmf_failover 00:15:28.292 ************************************ 00:15:28.292 00:15:28.292 real 0m32.770s 00:15:28.292 user 2m7.295s 00:15:28.292 sys 0m5.469s 00:15:28.292 00:27:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.292 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:15:28.292 00:27:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:28.292 00:27:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:28.292 00:27:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.292 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:15:28.292 ************************************ 00:15:28.292 START TEST nvmf_discovery 00:15:28.292 ************************************ 00:15:28.292 00:27:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:28.292 * Looking for test storage... 
00:15:28.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:28.292 00:27:44 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.292 00:27:44 -- nvmf/common.sh@7 -- # uname -s 00:15:28.292 00:27:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.292 00:27:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.292 00:27:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.292 00:27:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.292 00:27:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.292 00:27:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.292 00:27:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.292 00:27:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.292 00:27:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.292 00:27:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:28.292 00:27:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:28.292 00:27:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.292 00:27:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.292 00:27:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.292 00:27:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.292 00:27:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.292 00:27:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.292 00:27:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.292 00:27:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.292 00:27:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.292 00:27:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.292 00:27:44 -- paths/export.sh@5 
-- # export PATH 00:15:28.292 00:27:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.292 00:27:44 -- nvmf/common.sh@46 -- # : 0 00:15:28.292 00:27:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:28.292 00:27:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:28.292 00:27:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:28.292 00:27:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.292 00:27:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.292 00:27:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:28.292 00:27:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:28.292 00:27:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:28.292 00:27:44 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:28.292 00:27:44 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:28.292 00:27:44 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:28.292 00:27:44 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:28.292 00:27:44 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:28.292 00:27:44 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:28.292 00:27:44 -- host/discovery.sh@25 -- # nvmftestinit 00:15:28.292 00:27:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:28.292 00:27:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.292 00:27:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:28.292 00:27:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:28.292 00:27:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:28.292 00:27:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.292 00:27:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.292 00:27:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.292 00:27:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:28.292 00:27:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:28.292 00:27:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.292 00:27:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.292 00:27:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:28.292 00:27:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:28.292 00:27:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:28.292 00:27:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:28.292 00:27:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:28.292 00:27:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.292 00:27:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:28.292 
00:27:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:28.292 00:27:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:28.292 00:27:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:28.292 00:27:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:28.292 00:27:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:28.292 Cannot find device "nvmf_tgt_br" 00:15:28.292 00:27:44 -- nvmf/common.sh@154 -- # true 00:15:28.292 00:27:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.292 Cannot find device "nvmf_tgt_br2" 00:15:28.292 00:27:44 -- nvmf/common.sh@155 -- # true 00:15:28.292 00:27:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:28.292 00:27:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:28.292 Cannot find device "nvmf_tgt_br" 00:15:28.292 00:27:44 -- nvmf/common.sh@157 -- # true 00:15:28.292 00:27:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:28.292 Cannot find device "nvmf_tgt_br2" 00:15:28.292 00:27:44 -- nvmf/common.sh@158 -- # true 00:15:28.292 00:27:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:28.552 00:27:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:28.552 00:27:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.552 00:27:44 -- nvmf/common.sh@161 -- # true 00:15:28.552 00:27:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:28.552 00:27:44 -- nvmf/common.sh@162 -- # true 00:15:28.552 00:27:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:28.552 00:27:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:28.552 00:27:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:28.552 00:27:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:28.552 00:27:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:28.552 00:27:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:28.552 00:27:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:28.552 00:27:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:28.552 00:27:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:28.552 00:27:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:28.552 00:27:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:28.552 00:27:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:28.552 00:27:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:28.552 00:27:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:28.552 00:27:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:28.552 00:27:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:28.552 00:27:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:28.552 00:27:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:28.552 00:27:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:15:28.552 00:27:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.552 00:27:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.552 00:27:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.552 00:27:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.552 00:27:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:28.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:15:28.552 00:15:28.552 --- 10.0.0.2 ping statistics --- 00:15:28.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.552 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:28.552 00:27:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:28.552 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.552 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:28.552 00:15:28.552 --- 10.0.0.3 ping statistics --- 00:15:28.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.552 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:28.552 00:27:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:28.552 00:15:28.552 --- 10.0.0.1 ping statistics --- 00:15:28.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.552 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:28.552 00:27:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.552 00:27:44 -- nvmf/common.sh@421 -- # return 0 00:15:28.552 00:27:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:28.552 00:27:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.552 00:27:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:28.552 00:27:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:28.552 00:27:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.552 00:27:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:28.552 00:27:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:28.812 00:27:44 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:28.812 00:27:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:28.812 00:27:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:28.812 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.812 00:27:44 -- nvmf/common.sh@469 -- # nvmfpid=70278 00:15:28.812 00:27:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.812 00:27:44 -- nvmf/common.sh@470 -- # waitforlisten 70278 00:15:28.812 00:27:44 -- common/autotest_common.sh@819 -- # '[' -z 70278 ']' 00:15:28.812 00:27:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.812 00:27:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.812 00:27:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
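Before any NVMe/TCP traffic can flow, nvmf_veth_init (traced above) builds the test topology: one initiator veth in the root namespace, two target veths moved into the nvmf_tgt_ns_spdk namespace, all peer ends bridged together, with 10.0.0.1/2/3 assigned and a firewall rule admitting port 4420. A condensed sketch of the same steps, restricted to the commands and addresses visible in the trace above:

    # Namespace that will hold the target side of the links.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry traffic, the *_br ends join the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addresses: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the peer ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Admit NVMe/TCP into the initiator interface and across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT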
00:15:28.812 00:27:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.812 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.812 [2024-09-29 00:27:44.454531] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:28.812 [2024-09-29 00:27:44.454623] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.812 [2024-09-29 00:27:44.579949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.812 [2024-09-29 00:27:44.635190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:28.812 [2024-09-29 00:27:44.635609] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.812 [2024-09-29 00:27:44.635721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.812 [2024-09-29 00:27:44.635818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.812 [2024-09-29 00:27:44.635913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.750 00:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.750 00:27:45 -- common/autotest_common.sh@852 -- # return 0 00:15:29.750 00:27:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:29.750 00:27:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 00:27:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.750 00:27:45 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.750 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 [2024-09-29 00:27:45.468902] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.750 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.750 00:27:45 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:29.750 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 [2024-09-29 00:27:45.480989] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:29.750 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.750 00:27:45 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:29.750 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 null0 00:15:29.750 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.750 00:27:45 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:29.750 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 null1 00:15:29.750 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.750 00:27:45 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:29.750 00:27:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.750 00:27:45 -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.750 00:27:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.750 00:27:45 -- host/discovery.sh@45 -- # hostpid=70316 00:15:29.750 00:27:45 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:29.750 00:27:45 -- host/discovery.sh@46 -- # waitforlisten 70316 /tmp/host.sock 00:15:29.750 00:27:45 -- common/autotest_common.sh@819 -- # '[' -z 70316 ']' 00:15:29.750 00:27:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:15:29.750 00:27:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:29.750 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:29.750 00:27:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:29.750 00:27:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:29.750 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.750 [2024-09-29 00:27:45.559598] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:29.750 [2024-09-29 00:27:45.559869] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70316 ] 00:15:30.009 [2024-09-29 00:27:45.694221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.009 [2024-09-29 00:27:45.762592] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.009 [2024-09-29 00:27:45.763037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.946 00:27:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:30.946 00:27:46 -- common/autotest_common.sh@852 -- # return 0 00:15:30.946 00:27:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:30.946 00:27:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@72 -- # notify_id=0 00:15:30.946 00:27:46 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # sort 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # xargs 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:30.946 00:27:46 -- host/discovery.sh@79 -- # get_bdev_list 00:15:30.946 
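With the target listening for discovery on 10.0.0.2:8009 and a second nvmf_tgt running as the host side on /tmp/host.sock, the rest of the trace drives discovery through that host app and polls it as subsystems appear. A small sketch of the flow using the RPCs and NQNs shown in this trace; rpc_cmd is the harness RPC wrapper used throughout the trace (it forwards to scripts/rpc.py), and the repeated get_subsystem_names / get_bdev_list polling is collapsed here:

    # Point bdev_nvme discovery on the host app at the target's discovery
    # service, identifying as the test host NQN.
    rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Publish cnode0 on the target: create it, give it a namespace and a data
    # listener, and allow the test host NQN to connect.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp \
        -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2021-12.io.spdk:test

    # Discovery should now attach a controller named nvme0 and expose nvme0n1.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'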
00:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # sort 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # xargs 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:30.946 00:27:46 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # sort 00:15:30.946 00:27:46 -- host/discovery.sh@59 -- # xargs 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:30.946 00:27:46 -- host/discovery.sh@83 -- # get_bdev_list 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # sort 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- host/discovery.sh@55 -- # xargs 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.946 00:27:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:30.946 00:27:46 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:30.946 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.946 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:30.946 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.216 00:27:46 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:31.216 00:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.216 00:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.216 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.216 00:27:46 -- host/discovery.sh@59 -- # sort 00:15:31.216 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:31.216 00:27:46 -- host/discovery.sh@59 -- # xargs 00:15:31.216 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.216 00:27:46 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:31.216 00:27:46 -- host/discovery.sh@87 -- # get_bdev_list 00:15:31.216 00:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.216 00:27:46 -- host/discovery.sh@55 -- # sort 00:15:31.216 00:27:46 -- host/discovery.sh@55 -- # xargs 00:15:31.216 00:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.216 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.216 00:27:46 -- common/autotest_common.sh@10 -- # set 
+x 00:15:31.216 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.216 00:27:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:31.216 00:27:46 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:31.216 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.217 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:31.217 [2024-09-29 00:27:46.909498] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.217 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.217 00:27:46 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:31.217 00:27:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:31.217 00:27:46 -- host/discovery.sh@59 -- # sort 00:15:31.217 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.217 00:27:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:31.217 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:31.217 00:27:46 -- host/discovery.sh@59 -- # xargs 00:15:31.217 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.217 00:27:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:31.217 00:27:46 -- host/discovery.sh@93 -- # get_bdev_list 00:15:31.217 00:27:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:31.217 00:27:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:31.217 00:27:46 -- host/discovery.sh@55 -- # sort 00:15:31.217 00:27:46 -- host/discovery.sh@55 -- # xargs 00:15:31.217 00:27:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.217 00:27:46 -- common/autotest_common.sh@10 -- # set +x 00:15:31.217 00:27:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.217 00:27:47 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:31.217 00:27:47 -- host/discovery.sh@94 -- # get_notification_count 00:15:31.217 00:27:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:31.217 00:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.217 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:15:31.217 00:27:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:31.217 00:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.475 00:27:47 -- host/discovery.sh@74 -- # notification_count=0 00:15:31.475 00:27:47 -- host/discovery.sh@75 -- # notify_id=0 00:15:31.475 00:27:47 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:31.475 00:27:47 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:31.475 00:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.475 00:27:47 -- common/autotest_common.sh@10 -- # set +x 00:15:31.475 00:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.475 00:27:47 -- host/discovery.sh@100 -- # sleep 1 00:15:31.734 [2024-09-29 00:27:47.552471] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:31.734 [2024-09-29 00:27:47.552506] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:31.734 [2024-09-29 00:27:47.552527] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:31.734 [2024-09-29 00:27:47.558543] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:31.993 [2024-09-29 00:27:47.614239] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:31.993 [2024-09-29 00:27:47.614264] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:32.252 00:27:48 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:32.252 00:27:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.252 00:27:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.252 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.252 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.252 00:27:48 -- host/discovery.sh@59 -- # sort 00:15:32.252 00:27:48 -- host/discovery.sh@59 -- # xargs 00:15:32.510 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@102 -- # get_bdev_list 00:15:32.510 00:27:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.510 00:27:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.510 00:27:48 -- host/discovery.sh@55 -- # sort 00:15:32.510 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.510 00:27:48 -- host/discovery.sh@55 -- # xargs 00:15:32.510 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.510 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:32.510 00:27:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:32.510 00:27:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:32.510 00:27:48 -- host/discovery.sh@63 -- # sort -n 00:15:32.510 00:27:48 -- host/discovery.sh@63 -- # xargs 00:15:32.510 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.510 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.510 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@103 
-- # [[ 4420 == \4\4\2\0 ]] 00:15:32.510 00:27:48 -- host/discovery.sh@104 -- # get_notification_count 00:15:32.510 00:27:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:32.510 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.510 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.510 00:27:48 -- host/discovery.sh@74 -- # jq '. | length' 00:15:32.511 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.511 00:27:48 -- host/discovery.sh@74 -- # notification_count=1 00:15:32.511 00:27:48 -- host/discovery.sh@75 -- # notify_id=1 00:15:32.511 00:27:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:32.511 00:27:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:32.511 00:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:32.511 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:15:32.511 00:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:32.769 00:27:48 -- host/discovery.sh@109 -- # sleep 1 00:15:33.704 00:27:49 -- host/discovery.sh@110 -- # get_bdev_list 00:15:33.704 00:27:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.704 00:27:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.704 00:27:49 -- host/discovery.sh@55 -- # sort 00:15:33.704 00:27:49 -- host/discovery.sh@55 -- # xargs 00:15:33.704 00:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.704 00:27:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 00:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.704 00:27:49 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.704 00:27:49 -- host/discovery.sh@111 -- # get_notification_count 00:15:33.704 00:27:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:33.704 00:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.704 00:27:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 00:27:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.704 00:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.704 00:27:49 -- host/discovery.sh@74 -- # notification_count=1 00:15:33.704 00:27:49 -- host/discovery.sh@75 -- # notify_id=2 00:15:33.704 00:27:49 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:33.704 00:27:49 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:33.704 00:27:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:33.704 00:27:49 -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 [2024-09-29 00:27:49.476244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:33.704 [2024-09-29 00:27:49.476502] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:33.704 [2024-09-29 00:27:49.476536] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:33.704 00:27:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:33.704 00:27:49 -- host/discovery.sh@117 -- # sleep 1 00:15:33.704 [2024-09-29 00:27:49.482492] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:33.704 [2024-09-29 00:27:49.545808] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:33.705 [2024-09-29 00:27:49.545833] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:33.705 [2024-09-29 00:27:49.545839] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:34.640 00:27:50 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:34.640 00:27:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.640 00:27:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.640 00:27:50 -- host/discovery.sh@59 -- # sort 00:15:34.640 00:27:50 -- host/discovery.sh@59 -- # xargs 00:15:34.640 00:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.899 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 00:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@119 -- # get_bdev_list 00:15:34.899 00:27:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.899 00:27:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.899 00:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.899 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 00:27:50 -- host/discovery.sh@55 -- # xargs 00:15:34.899 00:27:50 -- host/discovery.sh@55 -- # sort 00:15:34.899 00:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:34.899 00:27:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:34.899 00:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.899 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 00:27:50 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:34.899 00:27:50 -- host/discovery.sh@63 
-- # sort -n 00:15:34.899 00:27:50 -- host/discovery.sh@63 -- # xargs 00:15:34.899 00:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@121 -- # get_notification_count 00:15:34.899 00:27:50 -- host/discovery.sh@74 -- # jq '. | length' 00:15:34.899 00:27:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:34.899 00:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.899 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 00:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@74 -- # notification_count=0 00:15:34.899 00:27:50 -- host/discovery.sh@75 -- # notify_id=2 00:15:34.899 00:27:50 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:34.899 00:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:34.899 00:27:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.899 [2024-09-29 00:27:50.702950] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:34.899 [2024-09-29 00:27:50.703004] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:34.899 00:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:34.899 00:27:50 -- host/discovery.sh@127 -- # sleep 1 00:15:34.899 [2024-09-29 00:27:50.708944] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:34.899 [2024-09-29 00:27:50.708997] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:34.899 [2024-09-29 00:27:50.709101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.899 [2024-09-29 00:27:50.709133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.899 [2024-09-29 00:27:50.709163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.899 [2024-09-29 00:27:50.709172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.899 [2024-09-29 00:27:50.709181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.899 [2024-09-29 00:27:50.709190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.899 [2024-09-29 00:27:50.709199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:34.899 [2024-09-29 00:27:50.709208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:34.899 [2024-09-29 00:27:50.709217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3ac10 is same with the state(5) to be set 00:15:36.277 00:27:51 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:15:36.277 00:27:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:36.277 00:27:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:36.277 00:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.277 00:27:51 -- host/discovery.sh@59 -- # sort 00:15:36.277 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 00:27:51 -- host/discovery.sh@59 -- # xargs 00:15:36.277 00:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@129 -- # get_bdev_list 00:15:36.277 00:27:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:36.277 00:27:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:36.277 00:27:51 -- host/discovery.sh@55 -- # sort 00:15:36.277 00:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.277 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 00:27:51 -- host/discovery.sh@55 -- # xargs 00:15:36.277 00:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:36.277 00:27:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:36.277 00:27:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:36.277 00:27:51 -- host/discovery.sh@63 -- # sort -n 00:15:36.277 00:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.277 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 00:27:51 -- host/discovery.sh@63 -- # xargs 00:15:36.277 00:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@131 -- # get_notification_count 00:15:36.277 00:27:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:36.277 00:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.277 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 00:27:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:36.277 00:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@74 -- # notification_count=0 00:15:36.277 00:27:51 -- host/discovery.sh@75 -- # notify_id=2 00:15:36.277 00:27:51 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:36.277 00:27:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:36.277 00:27:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.277 00:27:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:36.277 00:27:51 -- host/discovery.sh@135 -- # sleep 1 00:15:37.213 00:27:52 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:37.213 00:27:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:37.213 00:27:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:37.213 00:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.213 00:27:52 -- common/autotest_common.sh@10 -- # set +x 00:15:37.213 00:27:52 -- host/discovery.sh@59 -- # sort 00:15:37.213 00:27:52 -- host/discovery.sh@59 -- # xargs 00:15:37.213 00:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.213 00:27:53 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:37.213 00:27:53 -- host/discovery.sh@137 -- # get_bdev_list 00:15:37.213 00:27:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:37.213 00:27:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:37.213 00:27:53 -- host/discovery.sh@55 -- # sort 00:15:37.213 00:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.213 00:27:53 -- common/autotest_common.sh@10 -- # set +x 00:15:37.213 00:27:53 -- host/discovery.sh@55 -- # xargs 00:15:37.213 00:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.213 00:27:53 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:37.213 00:27:53 -- host/discovery.sh@138 -- # get_notification_count 00:15:37.213 00:27:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:37.472 00:27:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:37.472 00:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.472 00:27:53 -- common/autotest_common.sh@10 -- # set +x 00:15:37.472 00:27:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.472 00:27:53 -- host/discovery.sh@74 -- # notification_count=2 00:15:37.472 00:27:53 -- host/discovery.sh@75 -- # notify_id=4 00:15:37.472 00:27:53 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:37.472 00:27:53 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:37.472 00:27:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.472 00:27:53 -- common/autotest_common.sh@10 -- # set +x 00:15:38.406 [2024-09-29 00:27:54.124669] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:38.406 [2024-09-29 00:27:54.124705] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:38.406 [2024-09-29 00:27:54.124770] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:38.406 [2024-09-29 00:27:54.130719] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:38.406 [2024-09-29 00:27:54.189848] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:38.406 [2024-09-29 00:27:54.189907] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:38.406 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.407 00:27:54 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.407 00:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:15:38.407 00:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.407 00:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:38.407 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.407 00:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:38.407 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.407 00:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.407 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.407 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.407 request: 00:15:38.407 { 00:15:38.407 "name": "nvme", 00:15:38.407 "trtype": "tcp", 00:15:38.407 "traddr": "10.0.0.2", 00:15:38.407 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:38.407 "adrfam": "ipv4", 00:15:38.407 "trsvcid": "8009", 00:15:38.407 "wait_for_attach": true, 00:15:38.407 "method": "bdev_nvme_start_discovery", 00:15:38.407 "req_id": 1 00:15:38.407 } 00:15:38.407 Got JSON-RPC error response 00:15:38.407 response: 00:15:38.407 { 00:15:38.407 "code": -17, 00:15:38.407 "message": "File exists" 00:15:38.407 } 00:15:38.407 00:27:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:38.407 00:27:54 -- common/autotest_common.sh@643 -- # es=1 00:15:38.407 00:27:54 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:38.407 00:27:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:38.407 00:27:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:38.407 00:27:54 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:38.407 00:27:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:38.407 00:27:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:38.407 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.407 00:27:54 -- host/discovery.sh@67 -- # xargs 00:15:38.407 00:27:54 -- host/discovery.sh@67 -- # sort 00:15:38.407 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.407 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.664 00:27:54 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:38.664 00:27:54 -- host/discovery.sh@147 -- # get_bdev_list 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.664 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.664 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # sort 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # xargs 00:15:38.664 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.664 00:27:54 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:38.664 00:27:54 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.664 00:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:15:38.664 00:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.664 00:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:38.664 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.664 00:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:38.664 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.664 00:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:38.664 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.664 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 request: 00:15:38.664 { 00:15:38.664 "name": "nvme_second", 00:15:38.664 "trtype": "tcp", 00:15:38.664 "traddr": "10.0.0.2", 00:15:38.664 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:38.664 "adrfam": "ipv4", 00:15:38.664 "trsvcid": "8009", 00:15:38.664 "wait_for_attach": true, 00:15:38.664 "method": "bdev_nvme_start_discovery", 00:15:38.664 "req_id": 1 00:15:38.664 } 00:15:38.664 Got JSON-RPC error response 00:15:38.664 response: 00:15:38.664 { 00:15:38.664 "code": -17, 00:15:38.664 "message": "File exists" 00:15:38.664 } 00:15:38.664 00:27:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:38.664 00:27:54 -- common/autotest_common.sh@643 -- # es=1 00:15:38.664 00:27:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:38.664 00:27:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:38.664 00:27:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:38.664 
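The two JSON-RPC -17 ("File exists") responses in this stretch are the duplicate-registration checks in host/discovery.sh. A minimal sketch of what the NOT wrapper is asserting, assuming rpc_cmd forwards to scripts/rpc.py and that -s /tmp/host.sock selects the host application's RPC socket:

  # Re-registering the same discovery service name must fail with -17 (File exists).
  if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery name unexpectedly accepted"; exit 1
  fi
  # A second name pointed at the same discovery endpoint (10.0.0.2:8009) is rejected too.
  if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "second discovery name against the same trid unexpectedly accepted"; exit 1
  fi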
00:27:54 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:38.664 00:27:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:38.664 00:27:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:38.664 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.664 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.664 00:27:54 -- host/discovery.sh@67 -- # sort 00:15:38.664 00:27:54 -- host/discovery.sh@67 -- # xargs 00:15:38.664 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.664 00:27:54 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:38.664 00:27:54 -- host/discovery.sh@153 -- # get_bdev_list 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:38.664 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.664 00:27:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:38.664 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.665 00:27:54 -- host/discovery.sh@55 -- # sort 00:15:38.665 00:27:54 -- host/discovery.sh@55 -- # xargs 00:15:38.665 00:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.665 00:27:54 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:38.665 00:27:54 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:38.665 00:27:54 -- common/autotest_common.sh@640 -- # local es=0 00:15:38.665 00:27:54 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:38.665 00:27:54 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:15:38.665 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.665 00:27:54 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:15:38.665 00:27:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:38.665 00:27:54 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:38.665 00:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.665 00:27:54 -- common/autotest_common.sh@10 -- # set +x 00:15:40.073 [2024-09-29 00:27:55.456055] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.073 [2024-09-29 00:27:55.456207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.073 [2024-09-29 00:27:55.456250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.074 [2024-09-29 00:27:55.456265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8c270 with addr=10.0.0.2, port=8010 00:15:40.074 [2024-09-29 00:27:55.456283] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:40.074 [2024-09-29 00:27:55.456292] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:40.074 [2024-09-29 00:27:55.456301] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:40.659 [2024-09-29 00:27:56.455995] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.659 [2024-09-29 00:27:56.456108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
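The errno 111 lines above are the host-side poller retrying a discovery connect toward port 8010, where nothing is listening; host/discovery.sh@156 expects the RPC itself to fail once the 3000 ms attach timeout runs out. A sketch of that negative case, under the same assumptions about the rpc_cmd wrapper:

  # No listener on 8010, so the attach can never complete. With -T 3000 the
  # discovery poller gives up after about 3 s and the RPC returns JSON-RPC -110
  # (Connection timed out); the test counts that failure as a pass.
  if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
      -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
      echo "discovery toward a dead port should not succeed"; exit 1
  fi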
00:15:40.659 [2024-09-29 00:27:56.456149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:40.659 [2024-09-29 00:27:56.456165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8c270 with addr=10.0.0.2, port=8010 00:15:40.659 [2024-09-29 00:27:56.456182] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:40.659 [2024-09-29 00:27:56.456193] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:40.659 [2024-09-29 00:27:56.456202] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:42.036 [2024-09-29 00:27:57.455878] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:42.036 request: 00:15:42.036 { 00:15:42.036 "name": "nvme_second", 00:15:42.036 "trtype": "tcp", 00:15:42.036 "traddr": "10.0.0.2", 00:15:42.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:42.036 "adrfam": "ipv4", 00:15:42.036 "trsvcid": "8010", 00:15:42.036 "attach_timeout_ms": 3000, 00:15:42.036 "method": "bdev_nvme_start_discovery", 00:15:42.036 "req_id": 1 00:15:42.036 } 00:15:42.036 Got JSON-RPC error response 00:15:42.036 response: 00:15:42.036 { 00:15:42.036 "code": -110, 00:15:42.036 "message": "Connection timed out" 00:15:42.036 } 00:15:42.036 00:27:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:42.036 00:27:57 -- common/autotest_common.sh@643 -- # es=1 00:15:42.036 00:27:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:42.036 00:27:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:42.036 00:27:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:42.036 00:27:57 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:42.036 00:27:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:42.036 00:27:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:42.036 00:27:57 -- host/discovery.sh@67 -- # sort 00:15:42.036 00:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.036 00:27:57 -- host/discovery.sh@67 -- # xargs 00:15:42.036 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:15:42.036 00:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.036 00:27:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:42.036 00:27:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:42.036 00:27:57 -- host/discovery.sh@162 -- # kill 70316 00:15:42.036 00:27:57 -- host/discovery.sh@163 -- # nvmftestfini 00:15:42.036 00:27:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:42.036 00:27:57 -- nvmf/common.sh@116 -- # sync 00:15:42.036 00:27:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:42.036 00:27:57 -- nvmf/common.sh@119 -- # set +e 00:15:42.036 00:27:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:42.036 00:27:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:42.036 rmmod nvme_tcp 00:15:42.036 rmmod nvme_fabrics 00:15:42.036 rmmod nvme_keyring 00:15:42.036 00:27:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:42.036 00:27:57 -- nvmf/common.sh@123 -- # set -e 00:15:42.036 00:27:57 -- nvmf/common.sh@124 -- # return 0 00:15:42.036 00:27:57 -- nvmf/common.sh@477 -- # '[' -n 70278 ']' 00:15:42.036 00:27:57 -- nvmf/common.sh@478 -- # killprocess 70278 00:15:42.036 00:27:57 -- common/autotest_common.sh@926 -- # '[' -z 70278 ']' 00:15:42.036 00:27:57 -- common/autotest_common.sh@930 -- # kill -0 70278 00:15:42.036 00:27:57 -- 
common/autotest_common.sh@931 -- # uname 00:15:42.036 00:27:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:42.036 00:27:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70278 00:15:42.036 00:27:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:42.036 killing process with pid 70278 00:15:42.036 00:27:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:42.036 00:27:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70278' 00:15:42.036 00:27:57 -- common/autotest_common.sh@945 -- # kill 70278 00:15:42.036 00:27:57 -- common/autotest_common.sh@950 -- # wait 70278 00:15:42.036 00:27:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:42.036 00:27:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:42.036 00:27:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:42.036 00:27:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.036 00:27:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:42.036 00:27:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.036 00:27:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.036 00:27:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.036 00:27:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:42.036 00:15:42.036 real 0m13.907s 00:15:42.036 user 0m26.769s 00:15:42.036 sys 0m2.230s 00:15:42.036 00:27:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.036 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:15:42.036 ************************************ 00:15:42.036 END TEST nvmf_discovery 00:15:42.036 ************************************ 00:15:42.295 00:27:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:42.295 00:27:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:42.295 00:27:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.295 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:15:42.295 ************************************ 00:15:42.295 START TEST nvmf_discovery_remove_ifc 00:15:42.295 ************************************ 00:15:42.295 00:27:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:42.295 * Looking for test storage... 
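The START TEST banner above hands control to the next test script. Run outside the harness it reduces to roughly the following, assuming an SPDK build tree at the path shown in the trace:

  # Standalone equivalent of the run_test invocation traced below.
  cd /home/vagrant/spdk_repo/spdk
  ./test/nvmf/host/discovery_remove_ifc.sh --transport=tcp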
00:15:42.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.295 00:27:58 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.295 00:27:58 -- nvmf/common.sh@7 -- # uname -s 00:15:42.295 00:27:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.295 00:27:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.295 00:27:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.295 00:27:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.295 00:27:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.295 00:27:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.295 00:27:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.295 00:27:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.295 00:27:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.295 00:27:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.295 00:27:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:42.295 00:27:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:42.295 00:27:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.295 00:27:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.295 00:27:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.295 00:27:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.295 00:27:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.295 00:27:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.295 00:27:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.296 00:27:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.296 00:27:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.296 00:27:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.296 00:27:58 -- 
paths/export.sh@5 -- # export PATH 00:15:42.296 00:27:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.296 00:27:58 -- nvmf/common.sh@46 -- # : 0 00:15:42.296 00:27:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:42.296 00:27:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:42.296 00:27:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:42.296 00:27:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.296 00:27:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.296 00:27:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:42.296 00:27:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:42.296 00:27:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:42.296 00:27:58 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:42.296 00:27:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:42.296 00:27:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.296 00:27:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:42.296 00:27:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:42.296 00:27:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:42.296 00:27:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.296 00:27:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.296 00:27:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.296 00:27:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:42.296 00:27:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:42.296 00:27:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:42.296 00:27:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:42.296 00:27:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:42.296 00:27:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:42.296 00:27:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.296 00:27:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.296 00:27:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:42.296 00:27:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:42.296 00:27:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.296 00:27:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.296 00:27:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.296 00:27:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
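With NET_TYPE=virt and no physical NICs detected (is_hw=no), nvmftestinit falls through to nvmf_veth_init. The ip/iptables commands traced below amount to this topology, condensed here as a sketch rather than the full helper:

  # default netns (initiator)              netns nvmf_tgt_ns_spdk (target)
  #   nvmf_init_if 10.0.0.1/24   <bridge>    nvmf_tgt_if  10.0.0.2/24
  #                               nvmf_br    nvmf_tgt_if2 10.0.0.3/24
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT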
00:15:42.296 00:27:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.296 00:27:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.296 00:27:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.296 00:27:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.296 00:27:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:42.296 00:27:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:42.296 Cannot find device "nvmf_tgt_br" 00:15:42.296 00:27:58 -- nvmf/common.sh@154 -- # true 00:15:42.296 00:27:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.296 Cannot find device "nvmf_tgt_br2" 00:15:42.296 00:27:58 -- nvmf/common.sh@155 -- # true 00:15:42.296 00:27:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:42.296 00:27:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:42.296 Cannot find device "nvmf_tgt_br" 00:15:42.296 00:27:58 -- nvmf/common.sh@157 -- # true 00:15:42.296 00:27:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:42.296 Cannot find device "nvmf_tgt_br2" 00:15:42.296 00:27:58 -- nvmf/common.sh@158 -- # true 00:15:42.296 00:27:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:42.555 00:27:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:42.555 00:27:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.555 00:27:58 -- nvmf/common.sh@161 -- # true 00:15:42.555 00:27:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.555 00:27:58 -- nvmf/common.sh@162 -- # true 00:15:42.555 00:27:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.555 00:27:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.555 00:27:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.555 00:27:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.555 00:27:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.555 00:27:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.555 00:27:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.555 00:27:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:42.555 00:27:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:42.555 00:27:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:42.555 00:27:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:42.555 00:27:58 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:42.555 00:27:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:42.555 00:27:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.555 00:27:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.555 00:27:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.555 00:27:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:42.555 00:27:58 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:15:42.555 00:27:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.555 00:27:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.555 00:27:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.555 00:27:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.555 00:27:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.555 00:27:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:42.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:15:42.555 00:15:42.555 --- 10.0.0.2 ping statistics --- 00:15:42.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.555 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:42.555 00:27:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:42.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:42.555 00:15:42.555 --- 10.0.0.3 ping statistics --- 00:15:42.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.555 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:42.555 00:27:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:42.555 00:15:42.555 --- 10.0.0.1 ping statistics --- 00:15:42.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.555 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:42.555 00:27:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.555 00:27:58 -- nvmf/common.sh@421 -- # return 0 00:15:42.555 00:27:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.555 00:27:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.555 00:27:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.555 00:27:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.555 00:27:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.555 00:27:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.555 00:27:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.815 00:27:58 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:42.815 00:27:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.815 00:27:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.815 00:27:58 -- common/autotest_common.sh@10 -- # set +x 00:15:42.815 00:27:58 -- nvmf/common.sh@469 -- # nvmfpid=70804 00:15:42.815 00:27:58 -- nvmf/common.sh@470 -- # waitforlisten 70804 00:15:42.815 00:27:58 -- common/autotest_common.sh@819 -- # '[' -z 70804 ']' 00:15:42.815 00:27:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:42.815 00:27:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.815 00:27:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.815 00:27:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
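From this point the test drives two SPDK processes, and each rpc_cmd in the trace picks one of them by socket. A condensed sketch of the pair started here (pids 70804 and 70836 in this run):

  # Target: runs inside the namespace, owns the 10.0.0.2 listeners, answers on
  # the default /var/tmp/spdk.sock (rpc_cmd with no -s).
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # Host: the discovery/initiator side, reached via rpc_cmd -s /tmp/host.sock.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &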
00:15:42.815 00:27:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.815 00:27:58 -- common/autotest_common.sh@10 -- # set +x 00:15:42.815 [2024-09-29 00:27:58.465484] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:42.815 [2024-09-29 00:27:58.465604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.815 [2024-09-29 00:27:58.605700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.815 [2024-09-29 00:27:58.661292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.815 [2024-09-29 00:27:58.661445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.815 [2024-09-29 00:27:58.661461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.815 [2024-09-29 00:27:58.661470] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.815 [2024-09-29 00:27:58.661495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.755 00:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.755 00:27:59 -- common/autotest_common.sh@852 -- # return 0 00:15:43.755 00:27:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.755 00:27:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.755 00:27:59 -- common/autotest_common.sh@10 -- # set +x 00:15:43.755 00:27:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.755 00:27:59 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:43.755 00:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.755 00:27:59 -- common/autotest_common.sh@10 -- # set +x 00:15:43.755 [2024-09-29 00:27:59.527587] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.755 [2024-09-29 00:27:59.535769] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:43.755 null0 00:15:43.755 [2024-09-29 00:27:59.567704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.755 00:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.755 00:27:59 -- host/discovery_remove_ifc.sh@59 -- # hostpid=70836 00:15:43.755 00:27:59 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:43.755 00:27:59 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 70836 /tmp/host.sock 00:15:43.755 00:27:59 -- common/autotest_common.sh@819 -- # '[' -z 70836 ']' 00:15:43.755 00:27:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:15:43.755 00:27:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:43.755 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:43.755 00:27:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:43.755 00:27:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:43.755 00:27:59 -- common/autotest_common.sh@10 -- # set +x 00:15:44.015 [2024-09-29 00:27:59.637225] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:44.015 [2024-09-29 00:27:59.637310] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70836 ] 00:15:44.015 [2024-09-29 00:27:59.768756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.015 [2024-09-29 00:27:59.825863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.015 [2024-09-29 00:27:59.826020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.950 00:28:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:44.950 00:28:00 -- common/autotest_common.sh@852 -- # return 0 00:15:44.950 00:28:00 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.950 00:28:00 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:44.950 00:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.950 00:28:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.950 00:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.950 00:28:00 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:44.950 00:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.950 00:28:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.950 00:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:44.950 00:28:00 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:44.950 00:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:44.950 00:28:00 -- common/autotest_common.sh@10 -- # set +x 00:15:45.884 [2024-09-29 00:28:01.660796] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:45.884 [2024-09-29 00:28:01.660867] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:45.884 [2024-09-29 00:28:01.660888] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:45.884 [2024-09-29 00:28:01.666919] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:45.884 [2024-09-29 00:28:01.722925] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:45.884 [2024-09-29 00:28:01.722991] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:45.885 [2024-09-29 00:28:01.723017] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:45.885 [2024-09-29 00:28:01.723033] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:45.885 [2024-09-29 00:28:01.723072] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:45.885 00:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:45.885 [2024-09-29 
00:28:01.729385] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x123fbe0 was disconnected and freed. delete nvme_qpair. 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:45.885 00:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:45.885 00:28:01 -- common/autotest_common.sh@10 -- # set +x 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:45.885 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:46.143 00:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.143 00:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:46.143 00:28:01 -- common/autotest_common.sh@10 -- # set +x 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:46.143 00:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:46.143 00:28:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:47.078 00:28:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:47.078 00:28:02 -- common/autotest_common.sh@10 -- # set +x 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:47.078 00:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:47.078 00:28:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.454 00:28:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:48.454 00:28:03 -- common/autotest_common.sh@10 -- # set +x 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.454 00:28:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:48.454 00:28:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:49.391 00:28:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:49.391 00:28:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:15:49.391 00:28:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:49.391 00:28:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.391 00:28:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:49.391 00:28:04 -- common/autotest_common.sh@10 -- # set +x 00:15:49.391 00:28:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.391 00:28:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.391 00:28:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:49.391 00:28:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.328 00:28:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.328 00:28:06 -- common/autotest_common.sh@10 -- # set +x 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.328 00:28:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.328 00:28:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.264 00:28:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:51.264 00:28:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.264 00:28:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.264 00:28:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:51.264 00:28:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.264 00:28:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:51.264 00:28:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:51.523 00:28:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.523 [2024-09-29 00:28:07.151556] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:51.523 [2024-09-29 00:28:07.151652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.523 [2024-09-29 00:28:07.151668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.523 [2024-09-29 00:28:07.151680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.523 [2024-09-29 00:28:07.151689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.523 [2024-09-29 00:28:07.151698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.523 [2024-09-29 00:28:07.151706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.523 [2024-09-29 00:28:07.151715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.523 [2024-09-29 00:28:07.151723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.523 [2024-09-29 
00:28:07.151749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.523 [2024-09-29 00:28:07.151774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.523 [2024-09-29 00:28:07.151784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4de0 is same with the state(5) to be set 00:15:51.523 00:28:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.523 00:28:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.523 [2024-09-29 00:28:07.161551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b4de0 (9): Bad file descriptor 00:15:51.523 [2024-09-29 00:28:07.171571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:52.460 00:28:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.460 00:28:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.460 00:28:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.460 00:28:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:52.460 00:28:08 -- common/autotest_common.sh@10 -- # set +x 00:15:52.460 00:28:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.460 00:28:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.460 [2024-09-29 00:28:08.197460] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:53.397 [2024-09-29 00:28:09.221460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:54.772 [2024-09-29 00:28:10.245459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:54.772 [2024-09-29 00:28:10.245596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b4de0 with addr=10.0.0.2, port=4420 00:15:54.772 [2024-09-29 00:28:10.245629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4de0 is same with the state(5) to be set 00:15:54.772 [2024-09-29 00:28:10.245679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:54.772 [2024-09-29 00:28:10.245701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:54.772 [2024-09-29 00:28:10.245719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:54.772 [2024-09-29 00:28:10.245749] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:54.772 [2024-09-29 00:28:10.246527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b4de0 (9): Bad file descriptor 00:15:54.772 [2024-09-29 00:28:10.246620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:54.772 [2024-09-29 00:28:10.246669] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:54.772 [2024-09-29 00:28:10.246734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.772 [2024-09-29 00:28:10.246766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.772 [2024-09-29 00:28:10.246800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.772 [2024-09-29 00:28:10.246824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.772 [2024-09-29 00:28:10.246844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.772 [2024-09-29 00:28:10.246862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.772 [2024-09-29 00:28:10.246881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.772 [2024-09-29 00:28:10.246899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.772 [2024-09-29 00:28:10.246919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:54.772 [2024-09-29 00:28:10.246937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:54.772 [2024-09-29 00:28:10.246955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:15:54.772 [2024-09-29 00:28:10.247011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b51f0 (9): Bad file descriptor 00:15:54.772 [2024-09-29 00:28:10.248010] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:54.772 [2024-09-29 00:28:10.248075] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:54.772 00:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:54.772 00:28:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:54.772 00:28:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.709 00:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.709 00:28:11 -- common/autotest_common.sh@10 -- # set +x 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.709 00:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.709 00:28:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.709 00:28:11 -- common/autotest_common.sh@10 -- # set +x 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.709 00:28:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:55.709 00:28:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:56.645 [2024-09-29 00:28:12.255816] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:56.646 [2024-09-29 00:28:12.255854] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:56.646 [2024-09-29 00:28:12.255872] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:56.646 [2024-09-29 00:28:12.261848] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:56.646 [2024-09-29 00:28:12.316904] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:56.646 [2024-09-29 00:28:12.317123] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:56.646 [2024-09-29 00:28:12.317159] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:56.646 [2024-09-29 00:28:12.317176] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:56.646 [2024-09-29 00:28:12.317185] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:56.646 [2024-09-29 00:28:12.324469] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11f6ce0 was disconnected and freed. delete nvme_qpair. 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:56.646 00:28:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.646 00:28:12 -- common/autotest_common.sh@10 -- # set +x 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:56.646 00:28:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:56.646 00:28:12 -- host/discovery_remove_ifc.sh@90 -- # killprocess 70836 00:15:56.646 00:28:12 -- common/autotest_common.sh@926 -- # '[' -z 70836 ']' 00:15:56.646 00:28:12 -- common/autotest_common.sh@930 -- # kill -0 70836 00:15:56.646 00:28:12 -- common/autotest_common.sh@931 -- # uname 00:15:56.646 00:28:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.646 00:28:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70836 00:15:56.646 killing process with pid 70836 00:15:56.646 00:28:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.646 00:28:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.646 00:28:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70836' 00:15:56.646 00:28:12 -- common/autotest_common.sh@945 -- # kill 70836 00:15:56.646 00:28:12 -- common/autotest_common.sh@950 -- # wait 70836 00:15:56.905 00:28:12 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:56.905 00:28:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:56.905 00:28:12 -- nvmf/common.sh@116 -- # sync 00:15:56.905 00:28:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:56.905 00:28:12 -- nvmf/common.sh@119 -- # set +e 00:15:56.905 00:28:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:56.905 00:28:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:56.905 rmmod nvme_tcp 00:15:56.905 rmmod nvme_fabrics 00:15:56.905 rmmod nvme_keyring 00:15:57.164 00:28:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:57.164 00:28:12 -- nvmf/common.sh@123 -- # set -e 00:15:57.164 00:28:12 -- nvmf/common.sh@124 -- # return 0 00:15:57.164 00:28:12 -- nvmf/common.sh@477 -- # '[' -n 70804 ']' 00:15:57.164 00:28:12 -- nvmf/common.sh@478 -- # killprocess 70804 00:15:57.164 00:28:12 -- common/autotest_common.sh@926 -- # '[' -z 70804 ']' 00:15:57.164 00:28:12 -- common/autotest_common.sh@930 -- # kill -0 70804 00:15:57.164 00:28:12 -- common/autotest_common.sh@931 -- # uname 00:15:57.164 00:28:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:57.164 00:28:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70804 00:15:57.164 killing process with pid 70804 00:15:57.164 00:28:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:57.164 00:28:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
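For reference, the interface-removal scenario that just completed reduces to: attach through discovery, tear the namespaced target interface down, confirm the namespace bdev disappears, then restore the interface and confirm a fresh controller attaches. A minimal shell sketch of that flow, using the same RPCs, addresses and timeouts that appear in this log (the rpc.py invocation and the get_bdev_list/wait_for_bdev helpers are abbreviations of the traced rpc_cmd calls, not the literal script source):

    # poll the host app on /tmp/host.sock until its bdev list matches the expected value
    get_bdev_list() { scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    wait_for_bdev()  { while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done; }

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
    wait_for_bdev nvme0n1                                  # discovery attached the first controller

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''                                       # controller-loss timeout removes nvme0n1

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1                                  # rediscovery attaches a new controller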
00:15:57.164 00:28:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70804' 00:15:57.164 00:28:12 -- common/autotest_common.sh@945 -- # kill 70804 00:15:57.164 00:28:12 -- common/autotest_common.sh@950 -- # wait 70804 00:15:57.164 00:28:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:57.164 00:28:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:57.164 00:28:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:57.164 00:28:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.164 00:28:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:57.164 00:28:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.164 00:28:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.164 00:28:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.424 00:28:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:57.424 ************************************ 00:15:57.424 END TEST nvmf_discovery_remove_ifc 00:15:57.424 ************************************ 00:15:57.424 00:15:57.424 real 0m15.110s 00:15:57.424 user 0m24.411s 00:15:57.424 sys 0m2.347s 00:15:57.424 00:28:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.424 00:28:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.424 00:28:13 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:57.424 00:28:13 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:57.424 00:28:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:57.424 00:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.424 00:28:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.424 ************************************ 00:15:57.424 START TEST nvmf_digest 00:15:57.424 ************************************ 00:15:57.424 00:28:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:57.424 * Looking for test storage... 
00:15:57.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:57.424 00:28:13 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.424 00:28:13 -- nvmf/common.sh@7 -- # uname -s 00:15:57.424 00:28:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.424 00:28:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.424 00:28:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.424 00:28:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.424 00:28:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.424 00:28:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.424 00:28:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.424 00:28:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.424 00:28:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.424 00:28:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:57.424 00:28:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:15:57.424 00:28:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.424 00:28:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.424 00:28:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.424 00:28:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.424 00:28:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.424 00:28:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.424 00:28:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.424 00:28:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.424 00:28:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.424 00:28:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.424 00:28:13 -- paths/export.sh@5 
-- # export PATH 00:15:57.424 00:28:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.424 00:28:13 -- nvmf/common.sh@46 -- # : 0 00:15:57.424 00:28:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:57.424 00:28:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:57.424 00:28:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:57.424 00:28:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.424 00:28:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.424 00:28:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:57.424 00:28:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:57.424 00:28:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:57.424 00:28:13 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:57.424 00:28:13 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:57.424 00:28:13 -- host/digest.sh@16 -- # runtime=2 00:15:57.424 00:28:13 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:57.424 00:28:13 -- host/digest.sh@132 -- # nvmftestinit 00:15:57.424 00:28:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:57.424 00:28:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.424 00:28:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:57.424 00:28:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:57.424 00:28:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:57.424 00:28:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.424 00:28:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.424 00:28:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.424 00:28:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:57.424 00:28:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:57.424 00:28:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.424 00:28:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.424 00:28:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:57.424 00:28:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:57.424 00:28:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.424 00:28:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.424 00:28:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.424 00:28:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.424 00:28:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.424 00:28:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.424 00:28:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.424 00:28:13 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.424 00:28:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:57.424 00:28:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:57.424 Cannot find device "nvmf_tgt_br" 00:15:57.424 00:28:13 -- nvmf/common.sh@154 -- # true 00:15:57.424 00:28:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.424 Cannot find device "nvmf_tgt_br2" 00:15:57.424 00:28:13 -- nvmf/common.sh@155 -- # true 00:15:57.424 00:28:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:57.424 00:28:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:57.424 Cannot find device "nvmf_tgt_br" 00:15:57.424 00:28:13 -- nvmf/common.sh@157 -- # true 00:15:57.424 00:28:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:57.424 Cannot find device "nvmf_tgt_br2" 00:15:57.424 00:28:13 -- nvmf/common.sh@158 -- # true 00:15:57.424 00:28:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:57.738 00:28:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:57.738 00:28:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.738 00:28:13 -- nvmf/common.sh@161 -- # true 00:15:57.738 00:28:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.738 00:28:13 -- nvmf/common.sh@162 -- # true 00:15:57.739 00:28:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.739 00:28:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.739 00:28:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.739 00:28:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.739 00:28:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.739 00:28:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.739 00:28:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.739 00:28:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.739 00:28:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.739 00:28:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:57.739 00:28:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:57.739 00:28:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:57.739 00:28:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:57.739 00:28:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.739 00:28:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.739 00:28:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.739 00:28:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:57.739 00:28:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:57.739 00:28:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.739 00:28:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.739 00:28:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.739 
00:28:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.739 00:28:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.739 00:28:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:57.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:57.739 00:15:57.739 --- 10.0.0.2 ping statistics --- 00:15:57.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.739 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:57.739 00:28:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:57.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:15:57.739 00:15:57.739 --- 10.0.0.3 ping statistics --- 00:15:57.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.739 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:57.739 00:28:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:57.739 00:15:57.739 --- 10.0.0.1 ping statistics --- 00:15:57.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.739 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:57.739 00:28:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.739 00:28:13 -- nvmf/common.sh@421 -- # return 0 00:15:57.739 00:28:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:57.739 00:28:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.739 00:28:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:57.739 00:28:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:57.739 00:28:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.739 00:28:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:57.739 00:28:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:57.739 00:28:13 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:57.739 00:28:13 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:15:57.739 00:28:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:57.739 00:28:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.739 00:28:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 ************************************ 00:15:57.739 START TEST nvmf_digest_clean 00:15:57.739 ************************************ 00:15:57.739 00:28:13 -- common/autotest_common.sh@1104 -- # run_digest 00:15:57.739 00:28:13 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:15:57.739 00:28:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:57.739 00:28:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:57.739 00:28:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
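The digest tests below run against the veth/namespace topology that nvmf_veth_init just configured. Stripped of the link-up steps and cleanup checks traced above, the topology amounts to the following (device names and the 10.0.0.1/.2/.3 addresses exactly as pinged a moment ago; each link still has to be brought up as in the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, moved into the namespace
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                       # bridge the three *_br peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT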
00:15:57.739 00:28:13 -- nvmf/common.sh@469 -- # nvmfpid=71256 00:15:57.739 00:28:13 -- nvmf/common.sh@470 -- # waitforlisten 71256 00:15:57.739 00:28:13 -- common/autotest_common.sh@819 -- # '[' -z 71256 ']' 00:15:57.739 00:28:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:57.739 00:28:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.739 00:28:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:57.739 00:28:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.739 00:28:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:57.739 00:28:13 -- common/autotest_common.sh@10 -- # set +x 00:15:58.023 [2024-09-29 00:28:13.582198] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:15:58.023 [2024-09-29 00:28:13.582318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.023 [2024-09-29 00:28:13.720906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.023 [2024-09-29 00:28:13.822836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.023 [2024-09-29 00:28:13.823060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.023 [2024-09-29 00:28:13.823098] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.023 [2024-09-29 00:28:13.823126] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:58.023 [2024-09-29 00:28:13.823178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.960 00:28:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:58.960 00:28:14 -- common/autotest_common.sh@852 -- # return 0 00:15:58.960 00:28:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:58.960 00:28:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:58.960 00:28:14 -- common/autotest_common.sh@10 -- # set +x 00:15:58.960 00:28:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.960 00:28:14 -- host/digest.sh@120 -- # common_target_config 00:15:58.960 00:28:14 -- host/digest.sh@43 -- # rpc_cmd 00:15:58.960 00:28:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.960 00:28:14 -- common/autotest_common.sh@10 -- # set +x 00:15:58.960 null0 00:15:58.960 [2024-09-29 00:28:14.670129] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.960 [2024-09-29 00:28:14.694291] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.960 00:28:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.960 00:28:14 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:15:58.960 00:28:14 -- host/digest.sh@77 -- # local rw bs qd 00:15:58.960 00:28:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:58.960 00:28:14 -- host/digest.sh@80 -- # rw=randread 00:15:58.960 00:28:14 -- host/digest.sh@80 -- # bs=4096 00:15:58.960 00:28:14 -- host/digest.sh@80 -- # qd=128 00:15:58.960 00:28:14 -- host/digest.sh@82 -- # bperfpid=71288 00:15:58.960 00:28:14 -- host/digest.sh@83 -- # waitforlisten 71288 /var/tmp/bperf.sock 00:15:58.960 00:28:14 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:58.960 00:28:14 -- common/autotest_common.sh@819 -- # '[' -z 71288 ']' 00:15:58.960 00:28:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:58.960 00:28:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.960 00:28:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:58.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:58.960 00:28:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.960 00:28:14 -- common/autotest_common.sh@10 -- # set +x 00:15:58.960 [2024-09-29 00:28:14.754621] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:15:58.960 [2024-09-29 00:28:14.754922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71288 ] 00:15:59.220 [2024-09-29 00:28:14.894057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.220 [2024-09-29 00:28:14.962171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.157 00:28:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:00.157 00:28:15 -- common/autotest_common.sh@852 -- # return 0 00:16:00.157 00:28:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:00.157 00:28:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:00.157 00:28:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:00.157 00:28:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.157 00:28:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:00.725 nvme0n1 00:16:00.725 00:28:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:00.725 00:28:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:00.725 Running I/O for 2 seconds... 00:16:02.630 00:16:02.630 Latency(us) 00:16:02.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:02.630 nvme0n1 : 2.00 16223.26 63.37 0.00 0.00 7884.40 6881.28 24307.90 00:16:02.630 =================================================================================================================== 00:16:02.630 Total : 16223.26 63.37 0.00 0.00 7884.40 6881.28 24307.90 00:16:02.630 0 00:16:02.630 00:28:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:02.630 00:28:18 -- host/digest.sh@92 -- # get_accel_stats 00:16:02.630 00:28:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:02.630 00:28:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:02.630 | select(.opcode=="crc32c") 00:16:02.630 | "\(.module_name) \(.executed)"' 00:16:02.630 00:28:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:03.198 00:28:18 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:03.198 00:28:18 -- host/digest.sh@93 -- # exp_module=software 00:16:03.198 00:28:18 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:03.198 00:28:18 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:03.198 00:28:18 -- host/digest.sh@97 -- # killprocess 71288 00:16:03.198 00:28:18 -- common/autotest_common.sh@926 -- # '[' -z 71288 ']' 00:16:03.198 00:28:18 -- common/autotest_common.sh@930 -- # kill -0 71288 00:16:03.198 00:28:18 -- common/autotest_common.sh@931 -- # uname 00:16:03.198 00:28:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:03.198 00:28:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71288 00:16:03.198 killing process with pid 71288 00:16:03.198 Received shutdown signal, test time was about 2.000000 seconds 00:16:03.198 00:16:03.198 Latency(us) 00:16:03.198 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:16:03.198 =================================================================================================================== 00:16:03.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:03.198 00:28:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:03.198 00:28:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:03.198 00:28:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71288' 00:16:03.198 00:28:18 -- common/autotest_common.sh@945 -- # kill 71288 00:16:03.198 00:28:18 -- common/autotest_common.sh@950 -- # wait 71288 00:16:03.198 00:28:18 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:03.198 00:28:18 -- host/digest.sh@77 -- # local rw bs qd 00:16:03.198 00:28:18 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:03.198 00:28:18 -- host/digest.sh@80 -- # rw=randread 00:16:03.198 00:28:18 -- host/digest.sh@80 -- # bs=131072 00:16:03.198 00:28:18 -- host/digest.sh@80 -- # qd=16 00:16:03.198 00:28:18 -- host/digest.sh@82 -- # bperfpid=71348 00:16:03.198 00:28:18 -- host/digest.sh@83 -- # waitforlisten 71348 /var/tmp/bperf.sock 00:16:03.198 00:28:18 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:03.198 00:28:18 -- common/autotest_common.sh@819 -- # '[' -z 71348 ']' 00:16:03.198 00:28:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:03.198 00:28:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.198 00:28:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:03.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:03.198 00:28:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.198 00:28:18 -- common/autotest_common.sh@10 -- # set +x 00:16:03.198 [2024-09-29 00:28:19.006292] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:03.198 [2024-09-29 00:28:19.006666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:16:03.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:03.198 Zero copy mechanism will not be used. 
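Each run_bperf pass in this digest suite follows the same shape: launch bdevperf paused with --wait-for-rpc, attach the controller with the data-digest flag, drive the two-second workload, then read the accel statistics and check that the crc32c opcode was executed (by the software module in this configuration). A condensed sketch using the same socket, flags and jq filter seen in the log (repo paths abbreviated; the workload, block size and queue depth change per pass):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # pass criterion: the crc32c opcode reports a non-zero executed count for the expected module
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'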
00:16:03.457 [2024-09-29 00:28:19.140529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.457 [2024-09-29 00:28:19.192897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.394 00:28:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.394 00:28:19 -- common/autotest_common.sh@852 -- # return 0 00:16:04.394 00:28:19 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:04.394 00:28:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:04.394 00:28:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:04.394 00:28:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:04.394 00:28:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:04.653 nvme0n1 00:16:04.912 00:28:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:04.912 00:28:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:04.912 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:04.912 Zero copy mechanism will not be used. 00:16:04.912 Running I/O for 2 seconds... 00:16:06.828 00:16:06.828 Latency(us) 00:16:06.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.828 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:06.828 nvme0n1 : 2.00 8161.15 1020.14 0.00 0.00 1957.72 1690.53 4289.63 00:16:06.828 =================================================================================================================== 00:16:06.828 Total : 8161.15 1020.14 0.00 0.00 1957.72 1690.53 4289.63 00:16:06.828 0 00:16:06.828 00:28:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:06.828 00:28:22 -- host/digest.sh@92 -- # get_accel_stats 00:16:06.828 00:28:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:06.828 00:28:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:06.828 00:28:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:06.828 | select(.opcode=="crc32c") 00:16:06.828 | "\(.module_name) \(.executed)"' 00:16:07.106 00:28:22 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:07.106 00:28:22 -- host/digest.sh@93 -- # exp_module=software 00:16:07.106 00:28:22 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:07.106 00:28:22 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:07.106 00:28:22 -- host/digest.sh@97 -- # killprocess 71348 00:16:07.106 00:28:22 -- common/autotest_common.sh@926 -- # '[' -z 71348 ']' 00:16:07.106 00:28:22 -- common/autotest_common.sh@930 -- # kill -0 71348 00:16:07.106 00:28:22 -- common/autotest_common.sh@931 -- # uname 00:16:07.106 00:28:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:07.106 00:28:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71348 00:16:07.365 00:28:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:07.365 00:28:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:07.365 00:28:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71348' 00:16:07.365 killing process with pid 71348 00:16:07.365 Received shutdown signal, test time was about 
2.000000 seconds 00:16:07.365 00:16:07.365 Latency(us) 00:16:07.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.365 =================================================================================================================== 00:16:07.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.365 00:28:22 -- common/autotest_common.sh@945 -- # kill 71348 00:16:07.365 00:28:22 -- common/autotest_common.sh@950 -- # wait 71348 00:16:07.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:07.365 00:28:23 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:07.365 00:28:23 -- host/digest.sh@77 -- # local rw bs qd 00:16:07.365 00:28:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:07.365 00:28:23 -- host/digest.sh@80 -- # rw=randwrite 00:16:07.365 00:28:23 -- host/digest.sh@80 -- # bs=4096 00:16:07.365 00:28:23 -- host/digest.sh@80 -- # qd=128 00:16:07.365 00:28:23 -- host/digest.sh@82 -- # bperfpid=71408 00:16:07.365 00:28:23 -- host/digest.sh@83 -- # waitforlisten 71408 /var/tmp/bperf.sock 00:16:07.365 00:28:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:07.365 00:28:23 -- common/autotest_common.sh@819 -- # '[' -z 71408 ']' 00:16:07.365 00:28:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:07.365 00:28:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:07.365 00:28:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:07.365 00:28:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:07.365 00:28:23 -- common/autotest_common.sh@10 -- # set +x 00:16:07.624 [2024-09-29 00:28:23.222574] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:07.624 [2024-09-29 00:28:23.222963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71408 ] 00:16:07.624 [2024-09-29 00:28:23.365331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.624 [2024-09-29 00:28:23.419818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.560 00:28:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:08.560 00:28:24 -- common/autotest_common.sh@852 -- # return 0 00:16:08.560 00:28:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:08.560 00:28:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:08.560 00:28:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:08.818 00:28:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:08.818 00:28:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:09.078 nvme0n1 00:16:09.078 00:28:24 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:09.078 00:28:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:09.338 Running I/O for 2 seconds... 00:16:11.240 00:16:11.240 Latency(us) 00:16:11.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.240 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.240 nvme0n1 : 2.01 17960.31 70.16 0.00 0.00 7120.65 6464.23 16086.11 00:16:11.240 =================================================================================================================== 00:16:11.240 Total : 17960.31 70.16 0.00 0.00 7120.65 6464.23 16086.11 00:16:11.240 0 00:16:11.240 00:28:26 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:11.240 00:28:26 -- host/digest.sh@92 -- # get_accel_stats 00:16:11.241 00:28:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:11.241 00:28:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:11.241 00:28:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:11.241 | select(.opcode=="crc32c") 00:16:11.241 | "\(.module_name) \(.executed)"' 00:16:11.499 00:28:27 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:11.499 00:28:27 -- host/digest.sh@93 -- # exp_module=software 00:16:11.499 00:28:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:11.499 00:28:27 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:11.499 00:28:27 -- host/digest.sh@97 -- # killprocess 71408 00:16:11.499 00:28:27 -- common/autotest_common.sh@926 -- # '[' -z 71408 ']' 00:16:11.499 00:28:27 -- common/autotest_common.sh@930 -- # kill -0 71408 00:16:11.499 00:28:27 -- common/autotest_common.sh@931 -- # uname 00:16:11.499 00:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:11.499 00:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71408 00:16:11.499 killing process with pid 71408 00:16:11.499 Received shutdown signal, test time was about 2.000000 seconds 00:16:11.499 00:16:11.499 Latency(us) 00:16:11.499 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:16:11.499 =================================================================================================================== 00:16:11.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:11.499 00:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:11.499 00:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:11.499 00:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71408' 00:16:11.499 00:28:27 -- common/autotest_common.sh@945 -- # kill 71408 00:16:11.499 00:28:27 -- common/autotest_common.sh@950 -- # wait 71408 00:16:11.758 00:28:27 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:11.758 00:28:27 -- host/digest.sh@77 -- # local rw bs qd 00:16:11.758 00:28:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:11.758 00:28:27 -- host/digest.sh@80 -- # rw=randwrite 00:16:11.758 00:28:27 -- host/digest.sh@80 -- # bs=131072 00:16:11.758 00:28:27 -- host/digest.sh@80 -- # qd=16 00:16:11.758 00:28:27 -- host/digest.sh@82 -- # bperfpid=71470 00:16:11.758 00:28:27 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:11.758 00:28:27 -- host/digest.sh@83 -- # waitforlisten 71470 /var/tmp/bperf.sock 00:16:11.758 00:28:27 -- common/autotest_common.sh@819 -- # '[' -z 71470 ']' 00:16:11.758 00:28:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:11.758 00:28:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.758 00:28:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:11.758 00:28:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.758 00:28:27 -- common/autotest_common.sh@10 -- # set +x 00:16:11.758 [2024-09-29 00:28:27.521523] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:11.758 [2024-09-29 00:28:27.521835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71470 ] 00:16:11.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:11.758 Zero copy mechanism will not be used. 
00:16:12.016 [2024-09-29 00:28:27.653011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.016 [2024-09-29 00:28:27.706419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.951 00:28:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.951 00:28:28 -- common/autotest_common.sh@852 -- # return 0 00:16:12.951 00:28:28 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:12.951 00:28:28 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:12.951 00:28:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:13.209 00:28:28 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.209 00:28:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:13.468 nvme0n1 00:16:13.468 00:28:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:13.468 00:28:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:13.468 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:13.468 Zero copy mechanism will not be used. 00:16:13.468 Running I/O for 2 seconds... 00:16:16.047 00:16:16.047 Latency(us) 00:16:16.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.047 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:16.047 nvme0n1 : 2.00 6958.23 869.78 0.00 0.00 2294.56 1444.77 3783.21 00:16:16.047 =================================================================================================================== 00:16:16.047 Total : 6958.23 869.78 0.00 0.00 2294.56 1444.77 3783.21 00:16:16.047 0 00:16:16.047 00:28:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:16.047 00:28:31 -- host/digest.sh@92 -- # get_accel_stats 00:16:16.047 00:28:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:16.047 00:28:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:16.047 00:28:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:16.047 | select(.opcode=="crc32c") 00:16:16.047 | "\(.module_name) \(.executed)"' 00:16:16.047 00:28:31 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:16.047 00:28:31 -- host/digest.sh@93 -- # exp_module=software 00:16:16.047 00:28:31 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:16.047 00:28:31 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:16.047 00:28:31 -- host/digest.sh@97 -- # killprocess 71470 00:16:16.047 00:28:31 -- common/autotest_common.sh@926 -- # '[' -z 71470 ']' 00:16:16.047 00:28:31 -- common/autotest_common.sh@930 -- # kill -0 71470 00:16:16.047 00:28:31 -- common/autotest_common.sh@931 -- # uname 00:16:16.047 00:28:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.047 00:28:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71470 00:16:16.047 killing process with pid 71470 00:16:16.047 Received shutdown signal, test time was about 2.000000 seconds 00:16:16.047 00:16:16.047 Latency(us) 00:16:16.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.047 =================================================================================================================== 
00:16:16.047 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.047 00:28:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:16.047 00:28:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:16.047 00:28:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71470' 00:16:16.047 00:28:31 -- common/autotest_common.sh@945 -- # kill 71470 00:16:16.047 00:28:31 -- common/autotest_common.sh@950 -- # wait 71470 00:16:16.047 00:28:31 -- host/digest.sh@126 -- # killprocess 71256 00:16:16.047 00:28:31 -- common/autotest_common.sh@926 -- # '[' -z 71256 ']' 00:16:16.047 00:28:31 -- common/autotest_common.sh@930 -- # kill -0 71256 00:16:16.048 00:28:31 -- common/autotest_common.sh@931 -- # uname 00:16:16.048 00:28:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.048 00:28:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71256 00:16:16.048 killing process with pid 71256 00:16:16.048 00:28:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.048 00:28:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.048 00:28:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71256' 00:16:16.048 00:28:31 -- common/autotest_common.sh@945 -- # kill 71256 00:16:16.048 00:28:31 -- common/autotest_common.sh@950 -- # wait 71256 00:16:16.322 ************************************ 00:16:16.322 END TEST nvmf_digest_clean 00:16:16.322 ************************************ 00:16:16.322 00:16:16.322 real 0m18.457s 00:16:16.322 user 0m36.256s 00:16:16.322 sys 0m4.400s 00:16:16.322 00:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.322 00:28:31 -- common/autotest_common.sh@10 -- # set +x 00:16:16.322 00:28:32 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:16.322 00:28:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:16.322 00:28:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.322 00:28:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.322 ************************************ 00:16:16.322 START TEST nvmf_digest_error 00:16:16.322 ************************************ 00:16:16.322 00:28:32 -- common/autotest_common.sh@1104 -- # run_digest_error 00:16:16.322 00:28:32 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:16.322 00:28:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.322 00:28:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.322 00:28:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.322 00:28:32 -- nvmf/common.sh@469 -- # nvmfpid=71553 00:16:16.322 00:28:32 -- nvmf/common.sh@470 -- # waitforlisten 71553 00:16:16.322 00:28:32 -- common/autotest_common.sh@819 -- # '[' -z 71553 ']' 00:16:16.322 00:28:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:16.322 00:28:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.322 00:28:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.322 00:28:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
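--wait-for-rpc on the nvmf_tgt command line above is what makes the error variant of this test possible: the target stays un-initialized until framework_start_init is issued, so the crc32c opcode can first be rerouted to the error accel module (the accel_assign_opc notice appears a little further down). A sketch of that ordering, assuming the target's default /var/tmp/spdk.sock RPC socket (framework_start_init itself is not echoed in this excerpt, but it is the same call the bperf side issues above):

    # must run before subsystem init, hence --wait-for-rpc on nvmf_tgt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # now let the target finish bringing up its subsystems
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init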
00:16:16.322 00:28:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.322 00:28:32 -- common/autotest_common.sh@10 -- # set +x 00:16:16.322 [2024-09-29 00:28:32.096174] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:16.322 [2024-09-29 00:28:32.096266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.581 [2024-09-29 00:28:32.230716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.581 [2024-09-29 00:28:32.283206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.581 [2024-09-29 00:28:32.283391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.581 [2024-09-29 00:28:32.283421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.581 [2024-09-29 00:28:32.283432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.581 [2024-09-29 00:28:32.283476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.519 00:28:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.519 00:28:33 -- common/autotest_common.sh@852 -- # return 0 00:16:17.519 00:28:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.519 00:28:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.519 00:28:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.519 00:28:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.519 00:28:33 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:17.519 00:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.519 00:28:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.519 [2024-09-29 00:28:33.051980] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:17.519 00:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.519 00:28:33 -- host/digest.sh@104 -- # common_target_config 00:16:17.519 00:28:33 -- host/digest.sh@43 -- # rpc_cmd 00:16:17.519 00:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.519 00:28:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.519 null0 00:16:17.519 [2024-09-29 00:28:33.119252] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.519 [2024-09-29 00:28:33.143371] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.519 00:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.519 00:28:33 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:17.519 00:28:33 -- host/digest.sh@54 -- # local rw bs qd 00:16:17.519 00:28:33 -- host/digest.sh@56 -- # rw=randread 00:16:17.519 00:28:33 -- host/digest.sh@56 -- # bs=4096 00:16:17.519 00:28:33 -- host/digest.sh@56 -- # qd=128 00:16:17.519 00:28:33 -- host/digest.sh@58 -- # bperfpid=71585 00:16:17.519 00:28:33 -- host/digest.sh@60 -- # waitforlisten 71585 /var/tmp/bperf.sock 00:16:17.519 00:28:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:17.519 00:28:33 -- common/autotest_common.sh@819 -- # '[' -z 71585 ']' 00:16:17.519 
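common_target_config (host/digest.sh@43) is what produced the null0 bdev and the NVMe/TCP listener on 10.0.0.2:4420 reported a few lines up; the individual RPCs are not echoed by rpc_cmd in this trace. A rough equivalent using the standard SPDK RPC names (rpc.py shown without its full path; the null-bdev size and block size are illustrative, not taken from this log):

    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 100 4096        # 100 MiB null bdev, 4 KiB blocks (sizes assumed)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420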
00:28:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:17.519 00:28:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.519 00:28:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:17.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:17.519 00:28:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.519 00:28:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.519 [2024-09-29 00:28:33.195958] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:17.519 [2024-09-29 00:28:33.196282] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71585 ] 00:16:17.519 [2024-09-29 00:28:33.335150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.778 [2024-09-29 00:28:33.403387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.346 00:28:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.346 00:28:34 -- common/autotest_common.sh@852 -- # return 0 00:16:18.346 00:28:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.346 00:28:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:18.605 00:28:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:18.605 00:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.605 00:28:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.605 00:28:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.605 00:28:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.605 00:28:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:18.862 nvme0n1 00:16:18.862 00:28:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:18.862 00:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.862 00:28:34 -- common/autotest_common.sh@10 -- # set +x 00:16:19.120 00:28:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.120 00:28:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:19.120 00:28:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:19.120 Running I/O for 2 seconds... 
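"Running I/O for 2 seconds..." is the last quiet line for a while: everything after it is the injected failure path. The controller was attached with --ddgst, so the host verifies a CRC32C data digest on every received payload, and the rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 call just above switches the target's error accel module from pass-through to corrupting crc32c results, which makes the digests it sends back wrong. The sequence, condensed from the commands in this trace (rpc.py and bdevperf.py shown without their full /home/vagrant/spdk_repo/spdk paths; the calls without -s go to the target's default /var/tmp/spdk.sock, which is presumably how rpc_cmd is wired here):

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable        # target: no corruption while connecting
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # target: start corrupting crc32c results
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest shows up below as a data digest error in nvme_tcp_accel_seq_recv_compute_crc32_done on the host, and the affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); --bdev-retry-count -1 lets the bdev layer keep retrying, so the timed run still completes.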
00:16:19.120 [2024-09-29 00:28:34.831147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.831233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.831248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.845868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.845904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.845932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.860223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.860257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.860284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.874690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.874724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.874751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.889068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.889271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.889304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.903732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.903940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.904084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.918634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.918984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.933514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.933713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.933846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.948167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.948409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.948551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.120 [2024-09-29 00:28:34.963271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.120 [2024-09-29 00:28:34.963483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.120 [2024-09-29 00:28:34.963619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:34.979427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:34.979627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:34.979759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:34.994441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:34.994642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:34.994792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.009426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:35.009626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:35.009760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.024405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:35.024613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:35.024759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.039420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:35.039725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:35.039744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.054276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:35.054311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:35.054339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.069123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.378 [2024-09-29 00:28:35.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.378 [2024-09-29 00:28:35.069381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.378 [2024-09-29 00:28:35.084830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.084864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.084891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.099350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.099391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.099418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.113827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.113861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.113888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.128261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.128294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.128344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.142810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.142855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.142883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.157330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.157387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.157417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.171810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.171998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.172030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.186606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.186788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.186821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.201303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.201363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.201393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.379 [2024-09-29 00:28:35.216304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.379 [2024-09-29 00:28:35.216403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.379 [2024-09-29 00:28:35.216417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.233480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.233519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:19.637 [2024-09-29 00:28:35.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.250333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.250429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.637 [2024-09-29 00:28:35.250460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.265863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.265913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.637 [2024-09-29 00:28:35.265941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.280475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.280711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.637 [2024-09-29 00:28:35.280758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.295208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.295242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.637 [2024-09-29 00:28:35.295270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.637 [2024-09-29 00:28:35.309875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.637 [2024-09-29 00:28:35.309920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.309949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.324475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.324513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.324527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.339151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.339201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:2881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.339229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.353887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.353921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.353948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.368388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.368438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.368466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.384734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.384767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.384794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.400517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.400551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.400579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.415164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.415378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.415396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.429973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.430138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.430170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.445306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.445480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.445512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.460542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.460777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.460809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.638 [2024-09-29 00:28:35.476766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.638 [2024-09-29 00:28:35.476803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.638 [2024-09-29 00:28:35.476832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.494869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.494905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.494934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.512095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.512132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.512159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.530383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.530438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.530454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.548093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.548130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.548159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.565158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 
00:16:19.895 [2024-09-29 00:28:35.565403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.582929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.582970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.582999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.600099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.895 [2024-09-29 00:28:35.600167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.895 [2024-09-29 00:28:35.617128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.895 [2024-09-29 00:28:35.617292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.617311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.633833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.633873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.633902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.650487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.650529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.650560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.667100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.667136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.667164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.683493] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.683533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.683548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.699559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.699594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.699622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.714738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.714772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.714800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.896 [2024-09-29 00:28:35.730114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:19.896 [2024-09-29 00:28:35.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:19.896 [2024-09-29 00:28:35.730174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.746364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.746410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.746439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.761623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.761656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.761685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.776491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.776675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.776722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.791517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.791567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.791596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.812314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.812381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.812410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.826603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.826775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.826807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.841277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.841312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.841339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.855816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.855848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.855876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.870145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.870179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.870206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.884667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.884870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.884902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.899247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.899309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.913842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.913891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.913920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.928184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.928218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.928246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.942581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.942783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.957153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.957317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.957361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.971673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.971854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 00:28:35.971887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.154 [2024-09-29 00:28:35.986276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.154 [2024-09-29 00:28:35.986310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.154 [2024-09-29 
00:28:35.986338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.002229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.002550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.017711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.017745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.017773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.031981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.032013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.032041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.046560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.046592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.046620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.061371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.061440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.061469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.075730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.075764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.075792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.090059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.090092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10951 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.090118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.104418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.104598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.104630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.119274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.119308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.119335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.133769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.133802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.133829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.148072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.148106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.148134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.413 [2024-09-29 00:28:36.162974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.413 [2024-09-29 00:28:36.163031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.413 [2024-09-29 00:28:36.163060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.177519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.177551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.177578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.191738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.191772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:4812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.191799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.206049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.206082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.206109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.220227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.220259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.220286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.234602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.234635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.234662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.414 [2024-09-29 00:28:36.250179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.414 [2024-09-29 00:28:36.250212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.414 [2024-09-29 00:28:36.250240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.268111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.268146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.268174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.284178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.284212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.284240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.299377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.299409] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.299437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.314264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.314298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.314325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.329298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.329467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.329499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.344562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.344764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.344796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.360512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.672 [2024-09-29 00:28:36.360548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.672 [2024-09-29 00:28:36.360578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.672 [2024-09-29 00:28:36.375479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.375629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.375660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.390545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.390695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.405613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.405762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.405794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.420775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.420924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.420955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.435713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.435881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.436048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.451066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.451235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.451415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.466565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.466752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.466935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.481870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.482039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.482185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.497263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.497440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.497473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.673 [2024-09-29 00:28:36.512533] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.673 [2024-09-29 00:28:36.512573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.673 [2024-09-29 00:28:36.512587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.530219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.530405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.530438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.546700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.546764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.546793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.563795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.563829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.563856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.580005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.580041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.580068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.595184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.595218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.595246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.610452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.610486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.610513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:20.932 [2024-09-29 00:28:36.625496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.625528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.625555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.640521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.640729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.640760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.655701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.655911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.655943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.670917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.671118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.671279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.686474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.686665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.686798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.703616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.703870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.704022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.720813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.720989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.721146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.738804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.738998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.739149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.756218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.756438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.756606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:20.932 [2024-09-29 00:28:36.773908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:20.932 [2024-09-29 00:28:36.774142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:20.932 [2024-09-29 00:28:36.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.191 [2024-09-29 00:28:36.799751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2382d40) 00:16:21.191 [2024-09-29 00:28:36.799948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.191 [2024-09-29 00:28:36.800087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.191 00:16:21.191 Latency(us) 00:16:21.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.191 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:21.191 nvme0n1 : 2.00 16418.70 64.14 0.00 0.00 7791.21 6911.07 27644.28 00:16:21.191 =================================================================================================================== 00:16:21.191 Total : 16418.70 64.14 0.00 0.00 7791.21 6911.07 27644.28 00:16:21.191 0 00:16:21.191 00:28:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:21.191 00:28:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:21.191 | .driver_specific 00:16:21.191 | .nvme_error 00:16:21.191 | .status_code 00:16:21.191 | .command_transient_transport_error' 00:16:21.191 00:28:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:21.191 00:28:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:21.449 00:28:37 -- host/digest.sh@71 -- # (( 128 > 0 )) 00:16:21.449 00:28:37 -- host/digest.sh@73 -- # killprocess 71585 00:16:21.449 00:28:37 -- common/autotest_common.sh@926 -- # '[' -z 71585 ']' 00:16:21.449 00:28:37 -- common/autotest_common.sh@930 -- # kill -0 71585 00:16:21.449 00:28:37 -- common/autotest_common.sh@931 -- # uname 00:16:21.449 00:28:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:21.449 
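Note on the check traced above: host/digest.sh@71 (get_transient_errcount) validates the run by pulling per-bdev statistics from the bdevperf RPC socket and extracting the transient-transport-error counter with jq; the run above accumulated 128 such completions, one per corrupted data digest, so the assertion passes. A minimal restatement of that check, assuming the bdevperf instance from this trace is still listening on /var/tmp/bperf.sock (paths taken verbatim from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Read bdev iostat from the bdevperf app and pull out the NVMe transient transport error count.
    errcount=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    # The trace above expanded this to (( 128 > 0 )); any non-zero count means digest errors were seen.
    (( errcount > 0 ))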
00:28:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71585 00:16:21.449 killing process with pid 71585 00:16:21.449 Received shutdown signal, test time was about 2.000000 seconds 00:16:21.449 00:16:21.449 Latency(us) 00:16:21.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.449 =================================================================================================================== 00:16:21.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.449 00:28:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:21.449 00:28:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:21.449 00:28:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71585' 00:16:21.449 00:28:37 -- common/autotest_common.sh@945 -- # kill 71585 00:16:21.449 00:28:37 -- common/autotest_common.sh@950 -- # wait 71585 00:16:21.708 00:28:37 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:21.708 00:28:37 -- host/digest.sh@54 -- # local rw bs qd 00:16:21.708 00:28:37 -- host/digest.sh@56 -- # rw=randread 00:16:21.708 00:28:37 -- host/digest.sh@56 -- # bs=131072 00:16:21.708 00:28:37 -- host/digest.sh@56 -- # qd=16 00:16:21.708 00:28:37 -- host/digest.sh@58 -- # bperfpid=71645 00:16:21.708 00:28:37 -- host/digest.sh@60 -- # waitforlisten 71645 /var/tmp/bperf.sock 00:16:21.708 00:28:37 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:21.708 00:28:37 -- common/autotest_common.sh@819 -- # '[' -z 71645 ']' 00:16:21.708 00:28:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:21.708 00:28:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:21.708 00:28:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:21.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:21.708 00:28:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:21.708 00:28:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.708 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:21.708 Zero copy mechanism will not be used. 00:16:21.708 [2024-09-29 00:28:37.426629] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:16:21.708 [2024-09-29 00:28:37.426747] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71645 ] 00:16:21.967 [2024-09-29 00:28:37.565503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.967 [2024-09-29 00:28:37.621512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.903 00:28:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:22.903 00:28:38 -- common/autotest_common.sh@852 -- # return 0 00:16:22.903 00:28:38 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.903 00:28:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:22.903 00:28:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:22.903 00:28:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.903 00:28:38 -- common/autotest_common.sh@10 -- # set +x 00:16:22.903 00:28:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.903 00:28:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:22.903 00:28:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.162 nvme0n1 00:16:23.162 00:28:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:23.162 00:28:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.162 00:28:38 -- common/autotest_common.sh@10 -- # set +x 00:16:23.162 00:28:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.162 00:28:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:23.162 00:28:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:23.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:23.421 Zero copy mechanism will not be used. 00:16:23.421 Running I/O for 2 seconds... 
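Note on the setup traced above: this is the second digest test (run_bperf_err randread 131072 16). bdevperf is started with a 128 KiB random-read workload at queue depth 16, NVMe error statistics are enabled with unlimited bdev retries, the target namespace is attached over TCP with data digest (--ddgst) turned on, and accel_error_inject_error corrupts the crc32c calculation at interval 32 so the initiator observes data digest errors. A minimal sketch of the same RPC sequence; the rpc.py path, socket, and arguments are taken from the trace, and the un-socketed accel_error_inject_error call is assumed to address the target application's default RPC socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Initiator (bdevperf) side: record NVMe error stats and retry failed I/O indefinitely.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the target namespace over TCP with data digest enabled (--ddgst).
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c computations at interval 32 so computed data digests stop matching.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the workload defined on the bdevperf command line (randread, 131072, qd 16).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests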
00:16:23.421 [2024-09-29 00:28:39.094399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.094466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.094497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.099031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.099087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.099117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.103481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.103523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.103536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.108019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.108088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.108117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.112565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.112621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.112635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.116982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.117034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.117063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.121272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.121325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.121382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.125415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.125466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.421 [2024-09-29 00:28:39.125495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.421 [2024-09-29 00:28:39.129844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.421 [2024-09-29 00:28:39.129901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.129915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.134065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.134116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.134144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.138472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.138552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.142947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.142999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.143027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.147066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.147118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.147145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.151182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.151234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.151263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.155570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.155620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.155648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.159670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.159720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.159748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.163717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.163798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.163828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.168036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.168089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.168117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.172140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.172190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.172218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.176164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.176214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.176242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.180279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.180325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:23.422 [2024-09-29 00:28:39.180352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.184295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.184382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.184397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.188251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.188301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.188368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.192309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.192386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.192401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.196468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.196507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.196520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.200477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.200514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.200528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.204563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.204602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.204616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.208735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.208784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.208811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.212774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.212821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.212849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.216798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.216847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.216875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.220811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.220860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.220888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.224884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.224933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.224961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.228983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.229031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.229059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.233030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.233079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.233107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.237127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.237176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.237204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.241086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.241136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.241164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.245502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.245553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.245598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.249918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.249968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.249995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.254123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.254173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.254200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.258464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.258513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.258541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.262697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.422 [2024-09-29 00:28:39.262747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.422 [2024-09-29 00:28:39.262774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.422 [2024-09-29 00:28:39.267074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:23.423 [2024-09-29 00:28:39.267126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.423 [2024-09-29 00:28:39.267169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.271473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.271508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.271536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.275758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.275793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.275821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.279920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.279954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.279983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.283913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.283949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.283977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.287875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.287911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.287940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.291870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.291905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.291933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.295985] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.296020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.296049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.300142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.300177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.300205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.304312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.304386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.304401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.308681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.308731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.308759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.313005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.313205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.313254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.317777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.317813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.317842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.322397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.322446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.322476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:23.683 [2024-09-29 00:28:39.326775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.326810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.326838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.331011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.331046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.331074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.335333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.335429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.335445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.339598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.339662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.339691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.343850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.343884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.343912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.347841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.347875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.347904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.351841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.351876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.351903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.355703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.355752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.355781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.359654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.359689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.683 [2024-09-29 00:28:39.359716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.683 [2024-09-29 00:28:39.363899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.683 [2024-09-29 00:28:39.363950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.363978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.368447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.368486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.368500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.372617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.372684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.372726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.376525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.376563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.376593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.380537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.380573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.380602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.384428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.384464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.384494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.388448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.388487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.388501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.392394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.392430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.392459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.396444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.396479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.396508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.400395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.400432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.400462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.404417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.404454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.404467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.409544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.409576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:23.684 [2024-09-29 00:28:39.409604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.414408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.414441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.414469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.418373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.418407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.418435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.422685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.422720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.422748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.427191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.427240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.427268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.432117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.432152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.432179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.436449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.436486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.436515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.440517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.440554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.440583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.444561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.444600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.444614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.448496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.448533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.448563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.452642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.452682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.452697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.456569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.456608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.456637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.460584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.460622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.460652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.464568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.464606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.464651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.468565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.468603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.468633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.472584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.472622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.472666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.476755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.476789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.684 [2024-09-29 00:28:39.476817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.684 [2024-09-29 00:28:39.481108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.684 [2024-09-29 00:28:39.481143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.481171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.485323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.485385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.485415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.489378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.489423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.489450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.493401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.493433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.493460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.497755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:23.685 [2024-09-29 00:28:39.497790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.497818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.502114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.502150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.502179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.506554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.506635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.506663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.510796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.510830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.510858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.515014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.515049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.515078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.519097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.519131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.519159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.523244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.523278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.523307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.685 [2024-09-29 00:28:39.527663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940) 00:16:23.685 [2024-09-29 00:28:39.527700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.685 [2024-09-29 00:28:39.527729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.532031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.532066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.532094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.536488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.536529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.536543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.540538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.540578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.540607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.544580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.544619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.544633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.548567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.548606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.548621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.552599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.552636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.552681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.556620] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.556690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.556718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.560685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.560748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.560776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.564686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.564735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.564763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.568714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.568763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.568791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.572757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.572791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.572818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.576775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.576809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.576837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.580910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.580945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.580973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:16:23.948 [2024-09-29 00:28:39.585021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.585057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.585084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.589052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.589115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.593152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.593187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.593214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.597241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.597277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.597305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.601277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.601312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.601340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.605591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.605628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.605657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.609905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.609941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.609969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.614378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.614426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.614442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.948 [2024-09-29 00:28:39.618838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.948 [2024-09-29 00:28:39.618873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.948 [2024-09-29 00:28:39.618901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.623554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.623593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.623608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.628233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.628269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.628314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.632563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.632610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.632625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.636927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.636961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.636989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.641126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.641161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.641189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.645272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.645307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.645335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.649250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.649284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.649312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.653260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.653295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.653323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.657372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.657431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.657445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.661370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.661434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.661463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.665385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.665429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.665458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.669421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.669455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:23.949 [2024-09-29 00:28:39.669484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.673510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.673544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.673572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.677570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.677605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.677633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.681557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.681591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.681619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.685662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.685698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.685726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.689609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.689645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.689673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.693664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.693698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.693727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.697681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.697716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.697745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.701782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.701816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.701844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.705842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.705878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.705906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.709936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.709971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.949 [2024-09-29 00:28:39.709999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.949 [2024-09-29 00:28:39.713964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.949 [2024-09-29 00:28:39.713999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.714028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.718054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.718089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.718119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.722274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.722310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.722338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.726392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.726425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.726454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.730408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.730441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.730469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.734491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.734525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.734554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.738502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.738536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.738565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.742616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.742650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.742679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.746796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.746831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.746859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.751257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.751341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.755822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:23.950 [2024-09-29 00:28:39.756022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.756055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.760456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.760496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.760511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.764886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.764921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.764950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.768989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.769024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.769051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.773160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.773194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.773237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.777188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.777239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.777267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.781166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.781201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.781245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.785317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.785395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.785425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:23.950 [2024-09-29 00:28:39.789685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:23.950 [2024-09-29 00:28:39.789721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:23.950 [2024-09-29 00:28:39.789749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.221 [2024-09-29 00:28:39.794438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.221 [2024-09-29 00:28:39.794481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.221 [2024-09-29 00:28:39.794496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.221 [2024-09-29 00:28:39.799073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.221 [2024-09-29 00:28:39.799144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.221 [2024-09-29 00:28:39.799161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.221 [2024-09-29 00:28:39.803737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.221 [2024-09-29 00:28:39.803779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.221 [2024-09-29 00:28:39.803794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.221 [2024-09-29 00:28:39.808704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.221 [2024-09-29 00:28:39.808750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.221 [2024-09-29 00:28:39.808780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.221 [2024-09-29 00:28:39.813647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.813685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.813714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.817898] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.818079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.818113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.822333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.822377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.822406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.826199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.826234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.826263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.830283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.830318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.830377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.834364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.834407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.834436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.838256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.838291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.838320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.842293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.842357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.842387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:24.222 [2024-09-29 00:28:39.846471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.846506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.846534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.850501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.850536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.850564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.854589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.854624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.854652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.858625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.858659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.858687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.862638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.862673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.862700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.866640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.866674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.866702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.870572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.870607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.870634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.874730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.874765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.874793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.878778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.878813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.878841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.883591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.883632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.883677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.888476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.888516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.888531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.893283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.893344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.893360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.897420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.897456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.897485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.222 [2024-09-29 00:28:39.901451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.222 [2024-09-29 00:28:39.901502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.222 [2024-09-29 00:28:39.901531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.905806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.905869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.909872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.909907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.909936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.914019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.914055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.914082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.918033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.918068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.918096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.922101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.922135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.922163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.926143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.926178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.926206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.930237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.930272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.223 [2024-09-29 00:28:39.930300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.934267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.934303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.934331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.938383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.938427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.938455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.942344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.942404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.942432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.946478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.946512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.946541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.950638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.950672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.954658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.954691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.954720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.958835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.958884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.958912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.963349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.963416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.963432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.967746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.967936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.967969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.972558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.972614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.972644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.976857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.976893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.976921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.981323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.981391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.981407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.985771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.985805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.985833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.990157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.990192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.990221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.223 [2024-09-29 00:28:39.994737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.223 [2024-09-29 00:28:39.994771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.223 [2024-09-29 00:28:39.994800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:39.999188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:39.999260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:39.999274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.003620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.003674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.003689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.008129] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.008165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.008194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.012565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.012605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.012619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.017033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.017070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.017099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.021703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:24.224 [2024-09-29 00:28:40.021744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.021759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.026358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.026534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.026553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.031041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.031077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.031105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.035764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.035799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.035828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.040112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.040147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.040176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.044754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.044788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.044816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.048893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.048927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.048956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.053163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.053214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.053243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.057500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.057536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.057582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.061769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.061803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.061832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.224 [2024-09-29 00:28:40.066178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.224 [2024-09-29 00:28:40.066247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.224 [2024-09-29 00:28:40.066291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.070718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.070753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.070782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.074969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.075006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.075036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.079147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.079182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.079226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.083159] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.083209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.083238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.087071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.087106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.087135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.090998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.091032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.091061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.095140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.095175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.095219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.099293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.099328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.099404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.103904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.103945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.103975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.109192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.109432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.109452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.113858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.113895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.113923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.117931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.117967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.117996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.122027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.122063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.122091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.126131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.126166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.126194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.130334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.130379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.130408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.134504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.134539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.134582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.138869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.138905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.138933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.143624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.143662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.143707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.148024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.148061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.486 [2024-09-29 00:28:40.148089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.486 [2024-09-29 00:28:40.152521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.486 [2024-09-29 00:28:40.152561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.152576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.156977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.157013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.157041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.161475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.161515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.161529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.166186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.166242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.166256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.170937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.170972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.171001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.175424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.175461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.175491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.179642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.179678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.179706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.183716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.183751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.183780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.187740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.187776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.187804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.191849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.191886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.191915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.195908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.195945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.195974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.200012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.200047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.487 [2024-09-29 00:28:40.200076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.204354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.204393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.204408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.208804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.208840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.208869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.213217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.213253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.213282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.217591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.217627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.217656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.222011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.222049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.222078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.226245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.226281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.226310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.230521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.230557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.234969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.235010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.235041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.239188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.239225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.239254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.487 [2024-09-29 00:28:40.243455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.487 [2024-09-29 00:28:40.243508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.487 [2024-09-29 00:28:40.243537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.247897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.247934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.247962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.252116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.252154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.252183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.256273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.256310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.256379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.260610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.260664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.260693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.264792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.264833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.264848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.269002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.269038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.269066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.273266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.273303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.273333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.277362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.277423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.277437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.281578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.281620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.281649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.285515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.285550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.285578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.289730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:24.488 [2024-09-29 00:28:40.289766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.289795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.293723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.293758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.293787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.297865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.297903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.297933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.301970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.302005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.302033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.306048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.306085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.306114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.310573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.310612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.310643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.315007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.315044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.315074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.319266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.319303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.319332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.323896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.323932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.323962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.488 [2024-09-29 00:28:40.328473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.488 [2024-09-29 00:28:40.328523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.488 [2024-09-29 00:28:40.328538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.332952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.332989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.333035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.337583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.337650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.337695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.342176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.342233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.342248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.346684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.346719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.346747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.350960] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.350996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.351024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.355469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.355506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.355535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.359924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.359958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.359987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.364542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.364581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.364595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.368892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.368927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.368956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.372988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.373022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.373050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.377011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.377045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.377073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:16:24.751 [2024-09-29 00:28:40.381030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.381065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.381093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.385123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.385158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.385186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.389224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.389259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.393176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.393211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.393239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.397498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.397532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.397560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.401519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.401554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.401582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.405473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.405507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.405536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.409505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.409539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.409568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.413520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.413556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.413585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.751 [2024-09-29 00:28:40.417603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.751 [2024-09-29 00:28:40.417638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.751 [2024-09-29 00:28:40.417666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.421678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.421730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.421773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.426126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.426177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.426206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.430723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.430915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.430949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.434883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.434918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.434947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.438919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.438954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.438982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.442903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.442938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.442966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.446896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.446931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.446958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.450958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.450994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.451023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.455037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.455241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.455404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.459504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.459541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.459570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.463393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.463427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:24.752 [2024-09-29 00:28:40.463454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.467334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.467381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.467394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.471296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.471360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.471390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.475278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.475312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.475340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.479262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.479297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.479341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.483299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.483361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.483390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.487386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.487450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.491393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.491426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.491454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.495403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.495436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.495464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.499419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.499452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.499480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.503372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.503405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.503433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.507349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.507383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.507411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.511422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.511457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.511485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.515435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.515479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.515509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.519526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.519561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.519589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.523406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.752 [2024-09-29 00:28:40.523439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.752 [2024-09-29 00:28:40.523466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.752 [2024-09-29 00:28:40.527415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.527449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.527477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.531363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.531396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.531424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.535374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.535408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.535436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.539415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.539469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.539484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.543733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.543767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.543795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.547999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:24.753 [2024-09-29 00:28:40.548034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.548063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.552108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.552143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.552171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.556298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.556372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.556388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.560554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.560592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.560620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.564509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.564544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.564574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.568612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.568680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.568709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.573711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.573904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.573938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.578290] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.578328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.578368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.582316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.582396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.582410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.586946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.586980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.587008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.591837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.591877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.591906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.753 [2024-09-29 00:28:40.596516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:24.753 [2024-09-29 00:28:40.596556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.753 [2024-09-29 00:28:40.596570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.600806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.600841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.605064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.605101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.605129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.609232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.609267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.609295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.613229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.613264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.613293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.617225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.617259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.617287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.621275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.014 [2024-09-29 00:28:40.621310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.014 [2024-09-29 00:28:40.621339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.014 [2024-09-29 00:28:40.625249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.625284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.625312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.629467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.629503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.629517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.633544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.633580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.633594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.637803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.637840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.637869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.642206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.642243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.642273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.646848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.646883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.646912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.651211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.651247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.651275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.655579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.655615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.655628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.659923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.659957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.659985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.664203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.664238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.664265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.668234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.668269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.668298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.672247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.672281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.672309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.676431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.676469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.676499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.680472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.680509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.680538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.685024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.685061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.685089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.689600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.689635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.689664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.693663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.693698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:25.015 [2024-09-29 00:28:40.693727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.697699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.697734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.697762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.701880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.701916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.701944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.705914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.705949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.705977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.710070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.710106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.710134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.714156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.714207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.714235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.718158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.718208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.718237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.722196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.722231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.722259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.726283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.726319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.726356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.730296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.730360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.730390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.015 [2024-09-29 00:28:40.734496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.015 [2024-09-29 00:28:40.734531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.015 [2024-09-29 00:28:40.734560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.738552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.738601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.738628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.742472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.742506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.742534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.746473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.746507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.746535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.750459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.750493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.750521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.754487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.754521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.754550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.758537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.758585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.758612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.762413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.762448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.762476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.766355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.766388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.766416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.770356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.770389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.770417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.774429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.774462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.774490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.778360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:25.016 [2024-09-29 00:28:40.778403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.778430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.782466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.782499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.782527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.786439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.786473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.786501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.790362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.790396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.790423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.794778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.794817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.794831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.799246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.799285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.799314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.803607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.803642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.803671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.807790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.807825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.807852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.812068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.812105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.812133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.816189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.816224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.816252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.820220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.820255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.820283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.824247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.824282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.824310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.828210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.828246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.828274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.832380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.832417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.836303] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.836394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.836408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.840295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.840397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.016 [2024-09-29 00:28:40.840414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.016 [2024-09-29 00:28:40.844299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.016 [2024-09-29 00:28:40.844366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.017 [2024-09-29 00:28:40.844397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.017 [2024-09-29 00:28:40.848361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.017 [2024-09-29 00:28:40.848396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.017 [2024-09-29 00:28:40.848424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.017 [2024-09-29 00:28:40.852413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.017 [2024-09-29 00:28:40.852450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.017 [2024-09-29 00:28:40.852463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.017 [2024-09-29 00:28:40.856413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.017 [2024-09-29 00:28:40.856449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.017 [2024-09-29 00:28:40.856479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.861008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.861054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.861089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:25.278 [2024-09-29 00:28:40.865289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.865325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.865382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.869788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.869823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.869851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.873903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.873938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.873966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.877982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.878017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.878045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.882229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.882264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.882292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.886951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.886987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.887031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.891095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.891130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.891159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.895067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.895102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.895130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.899087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.899121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.899149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.903057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.903093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.903121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.906978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.907012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.907040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.911053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.911087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.911115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.915002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.915037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.915065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.919023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.919058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.278 [2024-09-29 00:28:40.919086] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.278 [2024-09-29 00:28:40.923074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.278 [2024-09-29 00:28:40.923109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.923138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.927099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.927133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.927161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.931061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.931096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.931124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.935042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.935077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.935104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.939021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.939055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.939084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.943327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.943409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.943441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.948055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.948094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 
00:28:40.948137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.952116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.952151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.952180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.956127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.956162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.956190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.960206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.960241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.960269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.964223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.964257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.964285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.968198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.968232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.968260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.972133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.972168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.972195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.976296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.976386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.976403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.980203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.980237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.980265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.984150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.984184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.984212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.988219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.988254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.988282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.992245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.992279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.992361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:40.996184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:40.996219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:40.996247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.000207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.000241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.000269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.004780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.004967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.005001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.009497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.009550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.009564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.013930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.013965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.013993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.018297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.018362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.018393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.022698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.022733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.022761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.026681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.026864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.026896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.030925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.279 [2024-09-29 00:28:41.030960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.279 [2024-09-29 00:28:41.030989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.279 [2024-09-29 00:28:41.034952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.034987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.035015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.038955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.038990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.039018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.042953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.042988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.043017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.046989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.047024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.047053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.050930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.050965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.050994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.054950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.054985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.055013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.058964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.059000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.059028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.062936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 
00:16:25.280 [2024-09-29 00:28:41.062970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.062998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.067009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.067045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.067073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.070979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.071014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.071041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.075000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.075035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.075063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.078966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.079000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.079029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.082947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.082982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.083010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.086923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x631940) 00:16:25.280 [2024-09-29 00:28:41.086958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.280 [2024-09-29 00:28:41.086986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.280 [2024-09-29 00:28:41.090803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x631940)
00:16:25.280 [2024-09-29 00:28:41.090837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:25.280 [2024-09-29 00:28:41.090865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:25.280
00:16:25.280 Latency(us)
00:16:25.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:25.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:16:25.280 nvme0n1 : 2.00 7371.43 921.43 0.00 0.00 2167.33 1623.51 5183.30
00:16:25.280 ===================================================================================================================
00:16:25.280 Total : 7371.43 921.43 0.00 0.00 2167.33 1623.51 5183.30
00:16:25.280 0
00:16:25.280 00:28:41 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:41 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:41 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:25.280 | .driver_specific
00:16:25.280 | .nvme_error
00:16:25.280 | .status_code
00:16:25.280 | .command_transient_transport_error'
00:16:25.850 00:28:41 -- host/digest.sh@71 -- # (( 476 > 0 ))
00:28:41 -- host/digest.sh@73 -- # killprocess 71645
00:28:41 -- common/autotest_common.sh@926 -- # '[' -z 71645 ']'
00:28:41 -- common/autotest_common.sh@930 -- # kill -0 71645
00:28:41 -- common/autotest_common.sh@931 -- # uname
00:28:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71645
killing process with pid 71645
Received shutdown signal, test time was about 2.000000 seconds
00:16:25.850
00:16:25.850 Latency(us)
00:16:25.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:25.850 ===================================================================================================================
00:16:25.850 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71645'
00:28:41 -- common/autotest_common.sh@945 -- # kill 71645
00:28:41 -- common/autotest_common.sh@950 -- # wait 71645
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
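The pass/fail check in the trace above reduces to one bdev_get_iostat RPC plus a jq filter over the bdev's NVMe status-code counters (enabled earlier via --nvme-error-stat). The lines below are a minimal standalone sketch of that step, reusing the socket path, bdev name, and jq path shown in the trace; the variable names are added here only for illustration.

# Sketch (not part of the log): how the transient-error count is read back.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # same rpc.py as in the trace
SOCK=/var/tmp/bperf.sock                           # bdevperf RPC socket used by this run
BDEV=nvme0n1                                       # bdev created by bdev_nvme_attach_controller

# With --nvme-error-stat set, bdev_get_iostat exposes per-status-code NVMe error
# counters under driver_specific.nvme_error.status_code for each bdev.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# host/digest.sh@71 passes if at least one injected digest error was counted
# (this run reports 476).
(( errcount > 0 )) && echo "transient transport errors: $errcount"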
00:16:25.850 00:28:41 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:28:41 -- host/digest.sh@54 -- # local rw bs qd
00:28:41 -- host/digest.sh@56 -- # rw=randwrite
00:28:41 -- host/digest.sh@56 -- # bs=4096
00:28:41 -- host/digest.sh@56 -- # qd=128
00:28:41 -- host/digest.sh@58 -- # bperfpid=71706
00:28:41 -- host/digest.sh@60 -- # waitforlisten 71706 /var/tmp/bperf.sock
00:28:41 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:41 -- common/autotest_common.sh@819 -- # '[' -z 71706 ']'
00:28:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:41 -- common/autotest_common.sh@10 -- # set +x
00:16:25.850 [2024-09-29 00:28:41.657272] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization...
00:16:25.850 [2024-09-29 00:28:41.658124] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71706 ]
00:16:26.109 [2024-09-29 00:28:41.797421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:26.109 [2024-09-29 00:28:41.849870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:27.048 00:28:42 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:27.048 00:28:42 -- common/autotest_common.sh@852 -- # return 0
00:16:27.048 00:28:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:27.048 00:28:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:27.048 00:28:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:27.048 00:28:42 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:27.048 00:28:42 -- common/autotest_common.sh@10 -- # set +x
00:16:27.048 00:28:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:27.048 00:28:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:27.048 00:28:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:27.307 nvme0n1
00:16:27.567 00:28:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:16:27.567 00:28:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:27.567 00:28:43 -- common/autotest_common.sh@10 -- # set +x
00:16:27.567 00:28:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:27.567 00:28:43 -- host/digest.sh@69 -- # bperf_py perform_tests
00:16:27.567 00:28:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:16:27.567 Running I/O for 2 seconds...
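Laid out as a plain script, the xtrace above amounts to the short sequence below. Paths and arguments are copied from the trace; the numbered comments, variable names, and backgrounding are added here as an illustrative sketch, and in the real host/digest.sh the accel_error_inject_error calls go through rpc_cmd to the NVMe-oF target application rather than to the bperf socket.

# Sketch (not part of the log): the randwrite digest-error setup performed above.
BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle (-z) with the randwrite / 4096-byte / qd=128 workload on core 1 (-m 2).
"$BPERF" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &

# 2. Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
#    so injected digest errors are counted instead of failing the job.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the TCP target with data digest enabled (--ddgst); the bdev shows up as nvme0n1.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Corrupt every 256th crc32c on the target side so the host sees data digest errors
#    (rpc_cmd in the trace; shown here against the target app's default RPC socket).
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256

# 5. Kick off the timed run ("Running I/O for 2 seconds...").
"$BPERF_PY" -s "$SOCK" perform_tests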
00:16:27.567 [2024-09-29 00:28:43.328982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ddc00 00:16:27.567 [2024-09-29 00:28:43.330419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.330503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:27.567 [2024-09-29 00:28:43.343865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fef90 00:16:27.567 [2024-09-29 00:28:43.345249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.345297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.567 [2024-09-29 00:28:43.358401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ff3c8 00:16:27.567 [2024-09-29 00:28:43.359762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.359810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:27.567 [2024-09-29 00:28:43.372826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190feb58 00:16:27.567 [2024-09-29 00:28:43.374156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.374203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:27.567 [2024-09-29 00:28:43.388298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fe720 00:16:27.567 [2024-09-29 00:28:43.389712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.389760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:27.567 [2024-09-29 00:28:43.404501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fe2e8 00:16:27.567 [2024-09-29 00:28:43.405880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.567 [2024-09-29 00:28:43.405930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.421406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fdeb0 00:16:27.827 [2024-09-29 00:28:43.422803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.422851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.436103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fda78 00:16:27.827 [2024-09-29 00:28:43.437435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.437480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.450835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fd640 00:16:27.827 [2024-09-29 00:28:43.452127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.452174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.465458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fd208 00:16:27.827 [2024-09-29 00:28:43.466800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.466847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.479981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fcdd0 00:16:27.827 [2024-09-29 00:28:43.481282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.481354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.495570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fc998 00:16:27.827 [2024-09-29 00:28:43.496913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.496960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:27.827 [2024-09-29 00:28:43.510919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fc560 00:16:27.827 [2024-09-29 00:28:43.512199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.827 [2024-09-29 00:28:43.512245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.525425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fc128 00:16:27.828 [2024-09-29 00:28:43.526719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.526780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.540949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fbcf0 00:16:27.828 [2024-09-29 00:28:43.542360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.542415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.555689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fb8b8 00:16:27.828 [2024-09-29 00:28:43.556983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.557029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.570260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fb480 00:16:27.828 [2024-09-29 00:28:43.571529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.571576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.584794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fb048 00:16:27.828 [2024-09-29 00:28:43.585995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.586041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.599202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fac10 00:16:27.828 [2024-09-29 00:28:43.600501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.600549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.613797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fa7d8 00:16:27.828 [2024-09-29 00:28:43.615011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.615056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.628231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190fa3a0 00:16:27.828 [2024-09-29 00:28:43.629460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.629505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.642768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f9f68 00:16:27.828 [2024-09-29 00:28:43.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.644019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.657368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f9b30 00:16:27.828 [2024-09-29 00:28:43.658581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.658627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:27.828 [2024-09-29 00:28:43.673583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f96f8 00:16:27.828 [2024-09-29 00:28:43.674817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:27.828 [2024-09-29 00:28:43.674885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.690501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f92c0 00:16:28.087 [2024-09-29 00:28:43.691720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.691799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.707561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f8e88 00:16:28.087 [2024-09-29 00:28:43.708812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.708857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.725379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f8a50 00:16:28.087 [2024-09-29 00:28:43.726492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.726543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.740297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f8618 00:16:28.087 [2024-09-29 00:28:43.741436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.741487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.755584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f81e0 00:16:28.087 [2024-09-29 00:28:43.756819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.756869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.771044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f7da8 00:16:28.087 [2024-09-29 00:28:43.772228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.772294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.787565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f7970 00:16:28.087 [2024-09-29 00:28:43.788750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.087 [2024-09-29 00:28:43.788799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:28.087 [2024-09-29 00:28:43.804249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f7538 00:16:28.087 [2024-09-29 00:28:43.805417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.805488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.819681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f7100 00:16:28.088 [2024-09-29 00:28:43.820814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.820863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.835055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f6cc8 00:16:28.088 [2024-09-29 00:28:43.836155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.836204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.850561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f6890 00:16:28.088 [2024-09-29 00:28:43.851663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.851711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.865173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f6458 00:16:28.088 [2024-09-29 00:28:43.866246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.866291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.879836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f6020 00:16:28.088 [2024-09-29 00:28:43.880918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.880965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.894409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f5be8 00:16:28.088 [2024-09-29 00:28:43.895427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.895480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.908806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f57b0 00:16:28.088 [2024-09-29 00:28:43.909850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.909896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:28.088 [2024-09-29 00:28:43.923121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f5378 00:16:28.088 [2024-09-29 00:28:43.924176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.088 [2024-09-29 00:28:43.924223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:43.938584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f4f40 00:16:28.348 [2024-09-29 00:28:43.939678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:43.939727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:43.953264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f4b08 00:16:28.348 [2024-09-29 00:28:43.954285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:43.954356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:43.967705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f46d0 00:16:28.348 [2024-09-29 00:28:43.968770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:43.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:43.985542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f4298 00:16:28.348 [2024-09-29 00:28:43.986817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:43.986865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.002421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f3e60 00:16:28.348 [2024-09-29 00:28:44.003391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.003445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.016940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f3a28 00:16:28.348 [2024-09-29 00:28:44.017911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.017957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.031283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f35f0 00:16:28.348 [2024-09-29 00:28:44.032265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.032311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.045849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f31b8 00:16:28.348 [2024-09-29 00:28:44.046801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.046848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.061597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f2d80 00:16:28.348 [2024-09-29 00:28:44.062567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 
00:28:44.062615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.075913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f2948 00:16:28.348 [2024-09-29 00:28:44.076913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.076959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.090384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f2510 00:16:28.348 [2024-09-29 00:28:44.091301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.091369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.105189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f20d8 00:16:28.348 [2024-09-29 00:28:44.106112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.106159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.119515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f1ca0 00:16:28.348 [2024-09-29 00:28:44.120462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.120511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.133815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f1868 00:16:28.348 [2024-09-29 00:28:44.134716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.134763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.148010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f1430 00:16:28.348 [2024-09-29 00:28:44.148946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.148990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.162373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f0ff8 00:16:28.348 [2024-09-29 00:28:44.163208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23636 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:28.348 [2024-09-29 00:28:44.163272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.176014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f0bc0 00:16:28.348 [2024-09-29 00:28:44.176917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.176965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:28.348 [2024-09-29 00:28:44.190023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f0788 00:16:28.348 [2024-09-29 00:28:44.190931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.348 [2024-09-29 00:28:44.190980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.205134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190f0350 00:16:28.608 [2024-09-29 00:28:44.205945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.608 [2024-09-29 00:28:44.206007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.219143] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eff18 00:16:28.608 [2024-09-29 00:28:44.219993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.608 [2024-09-29 00:28:44.220041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.233735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190efae0 00:16:28.608 [2024-09-29 00:28:44.234545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.608 [2024-09-29 00:28:44.234606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.248999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ef6a8 00:16:28.608 [2024-09-29 00:28:44.249803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.608 [2024-09-29 00:28:44.249850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.264472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ef270 00:16:28.608 [2024-09-29 00:28:44.265304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:25226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.608 [2024-09-29 00:28:44.265376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:28.608 [2024-09-29 00:28:44.279014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eee38 00:16:28.608 [2024-09-29 00:28:44.279815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.279860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.293399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eea00 00:16:28.609 [2024-09-29 00:28:44.294166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.294211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.307771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ee5c8 00:16:28.609 [2024-09-29 00:28:44.308503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.308539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.323292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ee190 00:16:28.609 [2024-09-29 00:28:44.324021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.324067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.337672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190edd58 00:16:28.609 [2024-09-29 00:28:44.338368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.338423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.352105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ed920 00:16:28.609 [2024-09-29 00:28:44.352870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.352916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.366430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ed4e8 00:16:28.609 [2024-09-29 00:28:44.367120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:18116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.367167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.380699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ed0b0 00:16:28.609 [2024-09-29 00:28:44.381466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.381521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.394917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ecc78 00:16:28.609 [2024-09-29 00:28:44.395610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.395657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.410198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ec840 00:16:28.609 [2024-09-29 00:28:44.410957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.411004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.426630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ec408 00:16:28.609 [2024-09-29 00:28:44.427280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.427316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:28.609 [2024-09-29 00:28:44.442268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ebfd0 00:16:28.609 [2024-09-29 00:28:44.442952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.609 [2024-09-29 00:28:44.443015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.457660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ebb98 00:16:28.869 [2024-09-29 00:28:44.458271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.458319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.472405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eb760 00:16:28.869 [2024-09-29 00:28:44.473091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.473155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.486915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eb328 00:16:28.869 [2024-09-29 00:28:44.487515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.487549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.503163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eaef0 00:16:28.869 [2024-09-29 00:28:44.503869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.518852] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190eaab8 00:16:28.869 [2024-09-29 00:28:44.519469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.519548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.534494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ea680 00:16:28.869 [2024-09-29 00:28:44.535084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.535147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.548661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190ea248 00:16:28.869 [2024-09-29 00:28:44.549241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.549289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.562703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e9e10 00:16:28.869 [2024-09-29 00:28:44.563261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.563301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.577464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e99d8 00:16:28.869 [2024-09-29 00:28:44.578090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.578123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.591532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e95a0 00:16:28.869 [2024-09-29 00:28:44.592053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.592087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.605415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e9168 00:16:28.869 [2024-09-29 00:28:44.605927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.605961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.619297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e8d30 00:16:28.869 [2024-09-29 00:28:44.619853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.619894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.633183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e88f8 00:16:28.869 [2024-09-29 00:28:44.633689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.869 [2024-09-29 00:28:44.633726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:28.869 [2024-09-29 00:28:44.646889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e84c0 00:16:28.869 [2024-09-29 00:28:44.647372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.870 [2024-09-29 00:28:44.647415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:28.870 [2024-09-29 00:28:44.660619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e8088 00:16:28.870 [2024-09-29 00:28:44.661129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.870 [2024-09-29 00:28:44.661164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.870 [2024-09-29 00:28:44.674385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e7c50 00:16:28.870 [2024-09-29 
00:28:44.674847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.870 [2024-09-29 00:28:44.674882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.870 [2024-09-29 00:28:44.688412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e7818 00:16:28.870 [2024-09-29 00:28:44.688926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.870 [2024-09-29 00:28:44.688962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.870 [2024-09-29 00:28:44.702224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e73e0 00:16:28.870 [2024-09-29 00:28:44.702682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:28.870 [2024-09-29 00:28:44.702718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.717192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e6fa8 00:16:29.129 [2024-09-29 00:28:44.717664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.717705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.733495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e6b70 00:16:29.129 [2024-09-29 00:28:44.733946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.733982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.750255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e6738 00:16:29.129 [2024-09-29 00:28:44.750765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.750806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.765330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e6300 00:16:29.129 [2024-09-29 00:28:44.765792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.765829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.779338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e5ec8 
00:16:29.129 [2024-09-29 00:28:44.779763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.779797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.793412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e5a90 00:16:29.129 [2024-09-29 00:28:44.793828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.793898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:29.129 [2024-09-29 00:28:44.807409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e5658 00:16:29.129 [2024-09-29 00:28:44.807796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.129 [2024-09-29 00:28:44.807834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.821133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e5220 00:16:29.130 [2024-09-29 00:28:44.821532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.821568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.835619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e4de8 00:16:29.130 [2024-09-29 00:28:44.835995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.836029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.852137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e49b0 00:16:29.130 [2024-09-29 00:28:44.852483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.852521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.868992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e4578 00:16:29.130 [2024-09-29 00:28:44.869328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.869375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.885046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with 
pdu=0x2000190e4140 00:16:29.130 [2024-09-29 00:28:44.885415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.885461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.900627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e3d08 00:16:29.130 [2024-09-29 00:28:44.901025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.901054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.915964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e38d0 00:16:29.130 [2024-09-29 00:28:44.916281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.916314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.930869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e3498 00:16:29.130 [2024-09-29 00:28:44.931155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.931202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.945616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e3060 00:16:29.130 [2024-09-29 00:28:44.945894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.945941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:29.130 [2024-09-29 00:28:44.960838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e2c28 00:16:29.130 [2024-09-29 00:28:44.961117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.130 [2024-09-29 00:28:44.961155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:44.978600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e27f0 00:16:29.390 [2024-09-29 00:28:44.978856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:44.978922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:44.994224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x241bdc0) with pdu=0x2000190e23b8 00:16:29.390 [2024-09-29 00:28:44.994465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:44.994487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.008630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e1f80 00:16:29.390 [2024-09-29 00:28:45.008929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.008952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.022504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e1b48 00:16:29.390 [2024-09-29 00:28:45.022711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.022730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.036095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e1710 00:16:29.390 [2024-09-29 00:28:45.036296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.036314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.049928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e12d8 00:16:29.390 [2024-09-29 00:28:45.050118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.050137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.063781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e0ea0 00:16:29.390 [2024-09-29 00:28:45.063963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.063982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.077381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e0a68 00:16:29.390 [2024-09-29 00:28:45.077556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.077574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.091312] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e0630 00:16:29.390 [2024-09-29 00:28:45.091500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.091519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.105069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190e01f8 00:16:29.390 [2024-09-29 00:28:45.105225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.390 [2024-09-29 00:28:45.105244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:29.390 [2024-09-29 00:28:45.119237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190dfdc0 00:16:29.391 [2024-09-29 00:28:45.119432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.119459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.133389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190df988 00:16:29.391 [2024-09-29 00:28:45.133525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.133545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.148112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190df550 00:16:29.391 [2024-09-29 00:28:45.148240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.148260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.162077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190df118 00:16:29.391 [2024-09-29 00:28:45.162197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.162216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.176174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190dece0 00:16:29.391 [2024-09-29 00:28:45.176289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.176308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.190059] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190de8a8 00:16:29.391 [2024-09-29 00:28:45.190163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.190182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.203937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190de038 00:16:29.391 [2024-09-29 00:28:45.204029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.204049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.223275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190de038 00:16:29.391 [2024-09-29 00:28:45.224696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.391 [2024-09-29 00:28:45.224761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:29.391 [2024-09-29 00:28:45.237831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190de470 00:16:29.651 [2024-09-29 00:28:45.239322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.651 [2024-09-29 00:28:45.239377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:29.651 [2024-09-29 00:28:45.252533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190de8a8 00:16:29.651 [2024-09-29 00:28:45.253853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.651 [2024-09-29 00:28:45.253899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:29.651 [2024-09-29 00:28:45.266756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190dece0 00:16:29.651 [2024-09-29 00:28:45.268057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.651 [2024-09-29 00:28:45.268101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:29.651 [2024-09-29 00:28:45.280667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190df118 00:16:29.651 [2024-09-29 00:28:45.281988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.651 [2024-09-29 00:28:45.282033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:29.651 
[2024-09-29 00:28:45.294647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bdc0) with pdu=0x2000190df550 00:16:29.651 [2024-09-29 00:28:45.295986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:29.651 [2024-09-29 00:28:45.296035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:29.651 00:16:29.651 Latency(us) 00:16:29.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.651 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:29.651 nvme0n1 : 2.00 17059.33 66.64 0.00 0.00 7496.99 6315.29 20733.21 00:16:29.651 =================================================================================================================== 00:16:29.651 Total : 17059.33 66.64 0.00 0.00 7496.99 6315.29 20733.21 00:16:29.651 0 00:16:29.651 00:28:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:29.651 00:28:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:29.651 00:28:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:29.651 | .driver_specific 00:16:29.651 | .nvme_error 00:16:29.651 | .status_code 00:16:29.651 | .command_transient_transport_error' 00:16:29.651 00:28:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:29.911 00:28:45 -- host/digest.sh@71 -- # (( 133 > 0 )) 00:16:29.911 00:28:45 -- host/digest.sh@73 -- # killprocess 71706 00:16:29.911 00:28:45 -- common/autotest_common.sh@926 -- # '[' -z 71706 ']' 00:16:29.911 00:28:45 -- common/autotest_common.sh@930 -- # kill -0 71706 00:16:29.911 00:28:45 -- common/autotest_common.sh@931 -- # uname 00:16:29.911 00:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.911 00:28:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71706 00:16:29.911 killing process with pid 71706 00:16:29.911 Received shutdown signal, test time was about 2.000000 seconds 00:16:29.911 00:16:29.911 Latency(us) 00:16:29.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.911 =================================================================================================================== 00:16:29.911 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.911 00:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:29.911 00:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:29.911 00:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71706' 00:16:29.911 00:28:45 -- common/autotest_common.sh@945 -- # kill 71706 00:16:29.911 00:28:45 -- common/autotest_common.sh@950 -- # wait 71706 00:16:30.170 00:28:45 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:30.170 00:28:45 -- host/digest.sh@54 -- # local rw bs qd 00:16:30.170 00:28:45 -- host/digest.sh@56 -- # rw=randwrite 00:16:30.170 00:28:45 -- host/digest.sh@56 -- # bs=131072 00:16:30.170 00:28:45 -- host/digest.sh@56 -- # qd=16 00:16:30.170 00:28:45 -- host/digest.sh@58 -- # bperfpid=71767 00:16:30.170 00:28:45 -- host/digest.sh@60 -- # waitforlisten 71767 /var/tmp/bperf.sock 00:16:30.170 00:28:45 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:30.170 00:28:45 -- 
common/autotest_common.sh@819 -- # '[' -z 71767 ']' 00:16:30.170 00:28:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:30.170 00:28:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.170 00:28:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:30.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:30.170 00:28:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.170 00:28:45 -- common/autotest_common.sh@10 -- # set +x 00:16:30.170 [2024-09-29 00:28:45.868425] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:30.170 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:30.170 Zero copy mechanism will not be used. 00:16:30.170 [2024-09-29 00:28:45.868523] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:16:30.170 [2024-09-29 00:28:46.000956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.428 [2024-09-29 00:28:46.058081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.364 00:28:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.364 00:28:46 -- common/autotest_common.sh@852 -- # return 0 00:16:31.364 00:28:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.364 00:28:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:31.364 00:28:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:31.364 00:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.364 00:28:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.365 00:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.365 00:28:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.365 00:28:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:31.622 nvme0n1 00:16:31.622 00:28:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:31.622 00:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.622 00:28:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.623 00:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.623 00:28:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:31.623 00:28:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:31.882 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:31.882 Zero copy mechanism will not be used. 00:16:31.882 Running I/O for 2 seconds... 
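[Note: this 131072-byte/qd16 pass is scored the same way as the 4096-byte pass above: the harness reads the per-bdev NVMe error counters over the bperf RPC socket and requires a non-zero transient-transport-error count. A minimal sketch of that check, assembled only from the rpc.py invocation and jq filter already traced in this log; the errcount variable name is illustrative, not part of the script:
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# CRC32C data-digest corruption injected via accel_error_inject_error should surface as COMMAND TRANSIENT TRANSPORT ERROR completions
(( errcount > 0 ))]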
00:16:31.882 [2024-09-29 00:28:47.580528] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.580903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.580933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.585753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.586230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.586433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.591243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.591747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.591906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.596815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.597117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.597145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.601799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.602149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.602188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.606749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.607050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.611700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.612027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.612054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.616590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.616985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.621444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.621732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.621773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.626208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.626553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.626606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.630993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.631287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.631314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.635822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.636175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.636212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.640685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.641004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.641030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.645468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.645779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.645832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.882 [2024-09-29 00:28:47.650296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.882 [2024-09-29 00:28:47.650640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.882 [2024-09-29 00:28:47.650676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.655089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.655388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.655413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.659857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.660200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.660238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.664838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.665116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.665173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.669626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.669905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.669962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.674369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.674683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.674735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.679212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.679617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.679654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.684214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.684573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.684606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.689602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.689928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.689953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.696228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.696622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.696710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.701671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.701969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.701995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.706484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.706801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.706854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.711406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.711689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.711714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.716120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.716474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 
[2024-09-29 00:28:47.716503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.720928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.721394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.721438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:31.883 [2024-09-29 00:28:47.726132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:31.883 [2024-09-29 00:28:47.726465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.883 [2024-09-29 00:28:47.726490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.731571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.731851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.731877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.736921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.737202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.737228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.741772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.742050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.742076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.746597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.746875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.746900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.751340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.751617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.751643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.756005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.756290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.756316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.760833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.761127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.761152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.765632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.765933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.765959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.770418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.770698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.770723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.775135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.775439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.780161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.780531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.780560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.785349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.785695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.785755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.790706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.791025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.791052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.796024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.796426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.796469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.801622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.146 [2024-09-29 00:28:47.802015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.146 [2024-09-29 00:28:47.802044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.146 [2024-09-29 00:28:47.807085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.807411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.807448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.812661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.813050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.813078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.818490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.818786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.818812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.823530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.823843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.823870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.828304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.828643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.828686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.833179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.833502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.833524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.838079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.838363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.838388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.842898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.843222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.843249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.847756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.848030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.848056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.852661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.852981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.853008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.857592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 
[2024-09-29 00:28:47.857925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.857952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.862455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.862728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.862754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.867236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.867540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.867567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.872180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.872535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.872564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.877068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.877386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.877409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.881832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.882104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.882130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.886710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.886987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.887013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.891480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.891757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.891784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.896164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.896528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.896556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.901141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.901469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.901496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.905971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.906254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.906280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.910888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.911193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.911220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.915814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.916097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.916123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.920856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.921151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.921178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.925695] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.925987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.147 [2024-09-29 00:28:47.926012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.147 [2024-09-29 00:28:47.930491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.147 [2024-09-29 00:28:47.930765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.930791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.935315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.935606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.935633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.940050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.940359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.940386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.944876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.945173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.945199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.949774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.950051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.950078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.954577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.954851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.954876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:32.148 [2024-09-29 00:28:47.959320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.959630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.959655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.964111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.964464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.964493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.969074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.969384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.973945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.974225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.974252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.978816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.979096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.979123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.983662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.983964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.984023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.148 [2024-09-29 00:28:47.988827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.148 [2024-09-29 00:28:47.989228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.148 [2024-09-29 00:28:47.989260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:47.994255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:47.994581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:47.994610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:47.999690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.000062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.000092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.005432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.005804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.005850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.010556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.010853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.010896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.015255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.015541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.015567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.020178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.020511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.020541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.025047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.025326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.025362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.029865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.030139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.030165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.034643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.034993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.039789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.040085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.040112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.044872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.045191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.045217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.049986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.050268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.050294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.054827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.055120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.055147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.059612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.059888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.059914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.064313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.064700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.064741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.069322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.069692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.069721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.074826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.075131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.075160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.080549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.080859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.080888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.085970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.086276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.086306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.090855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.091146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.091172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.095689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.095970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 
[2024-09-29 00:28:48.095996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.100495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.100821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.100848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.105259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.105551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.105577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.110129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.110486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.114965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.115245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.419 [2024-09-29 00:28:48.115270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.419 [2024-09-29 00:28:48.119669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.419 [2024-09-29 00:28:48.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.119970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.124729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.125033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.125059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.129520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.129797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.129823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.134228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.134571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.134598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.139019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.139297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.139323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.143686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.143961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.143987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.148527] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.148877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.148904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.153244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.153529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.157939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.158221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.158247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.162704] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.162991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.163017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.167429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.167707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.167732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.172245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.172570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.172614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.177018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.177296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.177322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.181776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.182070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.182096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.186622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.186912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.186937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.191319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.191610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.191635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.196065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.196385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.196407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.200857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.201165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.201192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.205665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.205955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.205981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.210477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.210778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.215339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.215630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.215655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.219973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.220246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.220271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.225579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.225901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.225959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.232169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 
[2024-09-29 00:28:48.232554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.232581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.237196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.237503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.237529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.242074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.242385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.242421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.246864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.247136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.247162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.251646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.251907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.251949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.256425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.256770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.256796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.420 [2024-09-29 00:28:48.261199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.420 [2024-09-29 00:28:48.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.420 [2024-09-29 00:28:48.261587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.266551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.266880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.266923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.271712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.272067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.272094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.276582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.276891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.276916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.281328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.281614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.281640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.286100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.286425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.286452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.290891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.291201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.291228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.296265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.296647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.296719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.301240] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.301543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.301568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.306394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.306692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.306719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.312133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.312498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.312527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.317743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.318060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.318090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.323461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.323782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.323810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.329218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.329610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.329648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.334672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.335062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.335097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:32.679 [2024-09-29 00:28:48.339877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.340194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.340222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.345160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.345545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.345584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.350429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.679 [2024-09-29 00:28:48.350790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.679 [2024-09-29 00:28:48.350842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.679 [2024-09-29 00:28:48.355973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.356285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.356314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.361169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.361513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.361544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.366336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.366700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.366728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.371280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.371584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.371610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.376371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.376680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.376710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.381496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.381824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.381853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.386471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.386800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.386829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.391587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.391877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.391905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.396544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.396897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.396927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.401706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.402021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.406876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.407182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.407212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.411891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.412230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.412260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.417288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.417629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.417657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.422559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.422878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.422905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.427590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.427889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.427933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.433010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.433327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.433364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.437993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.438308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.438347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.443146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.443486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.443515] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.448217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.448562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.448592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.453265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.453589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.453617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.458398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.458681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.458709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.463555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.463908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.463936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.468969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.469272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.469303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.474030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.474334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.474383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.479098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.479468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:32.680 [2024-09-29 00:28:48.479495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.484273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.484646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.484705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.489458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.489750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.489777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.680 [2024-09-29 00:28:48.494703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.680 [2024-09-29 00:28:48.495044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.680 [2024-09-29 00:28:48.495072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.499892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.500226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.500255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.505128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.505471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.505508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.510346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.510631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.510657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.515382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.515690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.515716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.520433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.520790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.520833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.681 [2024-09-29 00:28:48.526226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.681 [2024-09-29 00:28:48.526623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.681 [2024-09-29 00:28:48.526653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.940 [2024-09-29 00:28:48.531927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.940 [2024-09-29 00:28:48.532282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.940 [2024-09-29 00:28:48.532310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.940 [2024-09-29 00:28:48.537025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.940 [2024-09-29 00:28:48.537367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.940 [2024-09-29 00:28:48.537404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.940 [2024-09-29 00:28:48.542105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.940 [2024-09-29 00:28:48.542432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.940 [2024-09-29 00:28:48.542458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.940 [2024-09-29 00:28:48.546867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.547143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.547169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.551581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.551857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.551883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.556367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.556691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.556731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.561249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.561571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.561597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.566110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.566416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.566442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.570920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.571196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.571222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.575668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.575941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.575967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.580520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.580842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.580869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.585396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.585687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.585708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.590194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.590519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.590546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.595088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.595401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.595423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.599847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.600128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.600155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.604598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.604938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.604964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.609606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.609919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.609960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.615052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.615407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.615445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.619965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 
[2024-09-29 00:28:48.620238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.620264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.624761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.625053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.625079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.629652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.629941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.629969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.634402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.634679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.634705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.639131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.639434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.639461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.643942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.644217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.644242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.648778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.649080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.649107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.653576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.653868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.653893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.658299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.658587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.658613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.663084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.663402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.663427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.667892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.941 [2024-09-29 00:28:48.668175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.941 [2024-09-29 00:28:48.668201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.941 [2024-09-29 00:28:48.672853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.673148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.673174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.677666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.677957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.677983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.682592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.682883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.682909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.687447] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.687722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.687749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.692226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.692571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.692600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.697063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.697357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.697391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.701817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.702093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.702119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.706555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.706831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.706857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.711370] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.711653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.711679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.716130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.716487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.716515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:32.942 [2024-09-29 00:28:48.721000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.721291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.721318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.725751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.726024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.726050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.730652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.730932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.730958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.735553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.735853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.735879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.740454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.740808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.740833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.745331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.745651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.751027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.751357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.751393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.757785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.758088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.758114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.762661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.762940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.762966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.767455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.767755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.767780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.772859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.773190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.773219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.778096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.778442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:32.942 [2024-09-29 00:28:48.783293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:32.942 [2024-09-29 00:28:48.783693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.942 [2024-09-29 00:28:48.783721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.202 [2024-09-29 00:28:48.789039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.202 [2024-09-29 00:28:48.789324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.202 [2024-09-29 00:28:48.789373] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.202 [2024-09-29 00:28:48.794109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.202 [2024-09-29 00:28:48.794447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.202 [2024-09-29 00:28:48.794484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.799052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.799381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.799420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.804035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.804328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.804384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.809083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.809388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.809425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.814458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.814778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.814805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.820054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.820386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.820414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.825237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.825589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.825618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.830283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.830674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.830703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.835543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.835916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.835942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.840697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.841004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.841031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.845609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.845901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.845926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.850244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.850531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.850557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.855000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.855280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.855320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.859784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.860065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:33.203 [2024-09-29 00:28:48.860091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.864623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.864943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.864969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.869731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.870059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.870086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.875140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.875469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.875505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.879884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.880161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.880187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.884841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.885115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.885141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.889601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.889875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.889900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.894271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.894558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.894584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.898991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.899275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.899301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.903793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.904069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.904094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.908511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.908859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.908885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.913375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.913688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.913713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.918105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.918393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.918419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.203 [2024-09-29 00:28:48.922876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.203 [2024-09-29 00:28:48.923183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.203 [2024-09-29 00:28:48.923209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.927651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.927924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.927950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.932556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.932913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.932939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.937304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.937617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.942129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.942433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.942459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.946863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.947159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.947185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.951582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.951860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.951885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.956243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.956598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.956640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.961178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.961504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.961530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.965926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.966204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.966229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.970957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.971299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.971326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.976222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.976579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.976608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.981560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.981835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.981861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.987004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.987343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.987377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.992190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:48.992534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.992563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:48.997620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 
[2024-09-29 00:28:48.997912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:48.997941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.002879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.003218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.003247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.008092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.008458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.008487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.013517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.013833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.013860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.018747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.019072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.019100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.024110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.024485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.024514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.029667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.030017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.030044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.035116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.035500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.035526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.040431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.040780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.040805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.204 [2024-09-29 00:28:49.045833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.204 [2024-09-29 00:28:49.046161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.204 [2024-09-29 00:28:49.046191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.051522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.051795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.051821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.057093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.057451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.057489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.062057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.062332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.062368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.066842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.067118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.067143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.071659] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.071949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.071975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.076399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.076759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.076784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.081222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.081509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.081535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.086054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.086337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.086374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.090860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.091144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.091170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.095657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.095948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.095975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.100546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.100899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.100925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:33.464 [2024-09-29 00:28:49.105241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.105547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.105574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.109924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.110201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.110227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.114718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.114994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.115020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.119441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.119719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.119745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.124193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.124542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.124570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.129399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.129705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.129731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.134643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.464 [2024-09-29 00:28:49.134916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.464 [2024-09-29 00:28:49.134942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.464 [2024-09-29 00:28:49.139493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.139788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.139815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.144128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.144497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.144526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.149051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.149327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.149364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.153792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.154063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.154089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.158585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.158868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.158894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.163268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.163557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.163583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.168098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.168453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.168481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.172910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.173183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.173209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.177688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.177969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.182418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.182691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.182717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.187232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.187549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.187577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.191948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.192230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.192257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.196792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.197067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.197093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.201507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.201780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.201806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.206229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.210956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.211232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.211258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.215826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.216125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.216151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.220618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.220926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.220952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.225512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.225812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.225837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.230210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.230517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.230543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.234925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.235202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 
[2024-09-29 00:28:49.235228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.239629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.239918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.239944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.244845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.245146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.245174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.249762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.250035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.250061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.254483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.254757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.254784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.259083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.259369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.465 [2024-09-29 00:28:49.259395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.465 [2024-09-29 00:28:49.263932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.465 [2024-09-29 00:28:49.264267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.264294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.268897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.269185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.269212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.273623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.273898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.273924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.278352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.278624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.278650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.282975] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.283251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.283277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.287680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.287970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.287996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.292384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.292710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.292751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.297192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.297498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.297524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.301928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.302203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.302229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.466 [2024-09-29 00:28:49.306729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.466 [2024-09-29 00:28:49.307009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.466 [2024-09-29 00:28:49.307035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.312118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.312483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.312513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.317270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.317603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.317629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.322115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.322416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.322443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.326860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.327135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.327160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.331492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.331769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.331795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.336176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.336515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.336543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.341035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.341316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.341352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.345734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.346007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.346033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.350568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.350845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.350871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.355205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.355528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.355568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.359910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.360191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.360218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.364941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.365238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.365264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.371586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 
[2024-09-29 00:28:49.371913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.371939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.378118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.378439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.378466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.383310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.383611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.383637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.389450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.389796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.389824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.395892] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.396199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.396225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.401581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.401866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.401891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.406262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.726 [2024-09-29 00:28:49.406549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.726 [2024-09-29 00:28:49.406574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.726 [2024-09-29 00:28:49.411031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.411316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.415803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.416100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.416127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.420633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.421003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.421030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.425485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.425762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.425788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.430272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.430566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.430591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.435026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.435307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.439806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.440113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.440141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.444646] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.444990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.445016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.449887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.450170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.450196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.455265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.455611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.455639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.460805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.461095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.461122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.466083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.466383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.466420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.471514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.471854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.471897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.477029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.477356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.477410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:33.727 [2024-09-29 00:28:49.482572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.482980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.483009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.488312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.488693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.488719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.493726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.494078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.494106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.499007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.499331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.499366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.504251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.504586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.504616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.509700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.510035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.510063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.515421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.515817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.515863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.520836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.521186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.521231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.526446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.526732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.526758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.531631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.531941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.531969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.537054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.537420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.537457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.542031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.542312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.727 [2024-09-29 00:28:49.542347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.727 [2024-09-29 00:28:49.546960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.727 [2024-09-29 00:28:49.547270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.547298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.728 [2024-09-29 00:28:49.552024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.728 [2024-09-29 00:28:49.552308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.552371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.728 [2024-09-29 00:28:49.557044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.728 [2024-09-29 00:28:49.557361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.557398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:33.728 [2024-09-29 00:28:49.562204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.728 [2024-09-29 00:28:49.562560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.562587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:33.728 [2024-09-29 00:28:49.567383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.728 [2024-09-29 00:28:49.567675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.567703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:33.728 [2024-09-29 00:28:49.572485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x241bf60) with pdu=0x2000190fef90 00:16:33.728 [2024-09-29 00:28:49.572623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.728 [2024-09-29 00:28:49.572647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:33.987 00:16:33.987 Latency(us) 00:16:33.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.987 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:33.987 nvme0n1 : 2.00 6164.09 770.51 0.00 0.00 2590.23 2115.03 7417.48 00:16:33.987 =================================================================================================================== 00:16:33.987 Total : 6164.09 770.51 0.00 0.00 2590.23 2115.03 7417.48 00:16:33.987 0 00:16:33.987 00:28:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:33.987 00:28:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:33.987 00:28:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:33.987 | .driver_specific 00:16:33.987 | .nvme_error 00:16:33.987 | .status_code 00:16:33.987 | .command_transient_transport_error' 00:16:33.987 00:28:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:34.246 00:28:49 -- host/digest.sh@71 -- # (( 398 > 0 )) 00:16:34.246 00:28:49 -- host/digest.sh@73 -- # killprocess 71767 00:16:34.246 00:28:49 -- common/autotest_common.sh@926 -- # '[' -z 71767 ']' 00:16:34.246 00:28:49 -- common/autotest_common.sh@930 -- # kill -0 71767 00:16:34.246 00:28:49 -- common/autotest_common.sh@931 -- # uname 00:16:34.246 00:28:49 -- common/autotest_common.sh@931 -- 
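The get_transient_errcount step above reduces to a single bdev_get_iostat RPC filtered through jq: the bdev_nvme driver keeps per-status-code error counters, and the digest test only cares about COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal stand-alone sketch of that query, reusing the socket path and bdev name from this run (/var/tmp/bperf.sock and nvme0n1, both specific to this setup), is:

    # Sketch: count transient transport errors recorded for a bdev_nvme bdev.
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0]
                    | .driver_specific
                    | .nvme_error
                    | .status_code
                    | .command_transient_transport_error')
    # The test only passes if at least one injected digest error surfaced here.
    (( errs > 0 )) && echo "observed ${errs} transient transport errors"

In this run the counter came back as 398, which is why the (( 398 > 0 )) check above succeeds before the bperf processes are torn down.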
# '[' Linux = Linux ']' 00:16:34.246 00:28:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71767 00:16:34.246 00:28:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:34.246 00:28:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:34.246 killing process with pid 71767 00:16:34.246 00:28:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71767' 00:16:34.246 Received shutdown signal, test time was about 2.000000 seconds 00:16:34.246 00:16:34.246 Latency(us) 00:16:34.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.246 =================================================================================================================== 00:16:34.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:34.246 00:28:49 -- common/autotest_common.sh@945 -- # kill 71767 00:16:34.246 00:28:49 -- common/autotest_common.sh@950 -- # wait 71767 00:16:34.505 00:28:50 -- host/digest.sh@115 -- # killprocess 71553 00:16:34.505 00:28:50 -- common/autotest_common.sh@926 -- # '[' -z 71553 ']' 00:16:34.505 00:28:50 -- common/autotest_common.sh@930 -- # kill -0 71553 00:16:34.505 00:28:50 -- common/autotest_common.sh@931 -- # uname 00:16:34.505 00:28:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:34.505 00:28:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71553 00:16:34.505 00:28:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:34.506 00:28:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:34.506 killing process with pid 71553 00:16:34.506 00:28:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71553' 00:16:34.506 00:28:50 -- common/autotest_common.sh@945 -- # kill 71553 00:16:34.506 00:28:50 -- common/autotest_common.sh@950 -- # wait 71553 00:16:34.765 00:16:34.765 real 0m18.321s 00:16:34.765 user 0m35.844s 00:16:34.765 sys 0m4.562s 00:16:34.765 00:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.765 00:28:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.765 ************************************ 00:16:34.765 END TEST nvmf_digest_error 00:16:34.765 ************************************ 00:16:34.765 00:28:50 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:34.765 00:28:50 -- host/digest.sh@139 -- # nvmftestfini 00:16:34.765 00:28:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.765 00:28:50 -- nvmf/common.sh@116 -- # sync 00:16:34.765 00:28:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.765 00:28:50 -- nvmf/common.sh@119 -- # set +e 00:16:34.765 00:28:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.765 00:28:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.765 rmmod nvme_tcp 00:16:34.765 rmmod nvme_fabrics 00:16:34.765 rmmod nvme_keyring 00:16:34.765 00:28:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.765 00:28:50 -- nvmf/common.sh@123 -- # set -e 00:16:34.765 00:28:50 -- nvmf/common.sh@124 -- # return 0 00:16:34.765 00:28:50 -- nvmf/common.sh@477 -- # '[' -n 71553 ']' 00:16:34.765 00:28:50 -- nvmf/common.sh@478 -- # killprocess 71553 00:16:34.765 00:28:50 -- common/autotest_common.sh@926 -- # '[' -z 71553 ']' 00:16:34.765 00:28:50 -- common/autotest_common.sh@930 -- # kill -0 71553 00:16:34.765 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (71553) - No such process 00:16:34.765 Process with pid 71553 is not found 00:16:34.765 00:28:50 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 71553 is not found' 00:16:34.765 00:28:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:34.765 00:28:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:34.765 00:28:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:34.765 00:28:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.765 00:28:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:34.765 00:28:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.765 00:28:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.765 00:28:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.765 00:28:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:34.765 00:16:34.765 real 0m37.454s 00:16:34.765 user 1m12.272s 00:16:34.765 sys 0m9.274s 00:16:34.765 00:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.765 00:28:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.765 ************************************ 00:16:34.765 END TEST nvmf_digest 00:16:34.765 ************************************ 00:16:34.765 00:28:50 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:34.765 00:28:50 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:34.765 00:28:50 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:34.765 00:28:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:34.765 00:28:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:34.765 00:28:50 -- common/autotest_common.sh@10 -- # set +x 00:16:34.765 ************************************ 00:16:34.765 START TEST nvmf_multipath 00:16:34.765 ************************************ 00:16:34.765 00:28:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:35.024 * Looking for test storage... 
00:16:35.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.024 00:28:50 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.024 00:28:50 -- nvmf/common.sh@7 -- # uname -s 00:16:35.024 00:28:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.024 00:28:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.024 00:28:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.024 00:28:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.024 00:28:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.024 00:28:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.024 00:28:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.024 00:28:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.024 00:28:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.024 00:28:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.024 00:28:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:16:35.024 00:28:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:16:35.024 00:28:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.024 00:28:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.024 00:28:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.024 00:28:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.024 00:28:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.024 00:28:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.024 00:28:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.025 00:28:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.025 00:28:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.025 00:28:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.025 00:28:50 -- paths/export.sh@5 
-- # export PATH 00:16:35.025 00:28:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.025 00:28:50 -- nvmf/common.sh@46 -- # : 0 00:16:35.025 00:28:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:35.025 00:28:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:35.025 00:28:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:35.025 00:28:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.025 00:28:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.025 00:28:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:35.025 00:28:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:35.025 00:28:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:35.025 00:28:50 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.025 00:28:50 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.025 00:28:50 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.025 00:28:50 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:35.025 00:28:50 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.025 00:28:50 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:35.025 00:28:50 -- host/multipath.sh@30 -- # nvmftestinit 00:16:35.025 00:28:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:35.025 00:28:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.025 00:28:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:35.025 00:28:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:35.025 00:28:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:35.025 00:28:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.025 00:28:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.025 00:28:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.025 00:28:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:35.025 00:28:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:35.025 00:28:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:35.025 00:28:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:35.025 00:28:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:35.025 00:28:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:35.025 00:28:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.025 00:28:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.025 00:28:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.025 00:28:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:35.025 00:28:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.025 00:28:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.025 00:28:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.025 00:28:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.025 00:28:50 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.025 00:28:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.025 00:28:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.025 00:28:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.025 00:28:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:35.025 00:28:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:35.025 Cannot find device "nvmf_tgt_br" 00:16:35.025 00:28:50 -- nvmf/common.sh@154 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.025 Cannot find device "nvmf_tgt_br2" 00:16:35.025 00:28:50 -- nvmf/common.sh@155 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:35.025 00:28:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:35.025 Cannot find device "nvmf_tgt_br" 00:16:35.025 00:28:50 -- nvmf/common.sh@157 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:35.025 Cannot find device "nvmf_tgt_br2" 00:16:35.025 00:28:50 -- nvmf/common.sh@158 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.025 00:28:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:35.025 00:28:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.025 00:28:50 -- nvmf/common.sh@161 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.025 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.025 00:28:50 -- nvmf/common.sh@162 -- # true 00:16:35.025 00:28:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.025 00:28:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.025 00:28:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.025 00:28:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.025 00:28:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.284 00:28:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.284 00:28:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.284 00:28:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.284 00:28:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.284 00:28:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.284 00:28:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.284 00:28:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:35.284 00:28:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.284 00:28:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.284 00:28:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.284 00:28:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.284 00:28:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.284 00:28:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.284 00:28:50 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.284 00:28:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.284 00:28:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.284 00:28:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.284 00:28:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.284 00:28:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:35.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:35.284 00:16:35.284 --- 10.0.0.2 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:35.284 00:28:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:35.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:35.284 00:16:35.284 --- 10.0.0.3 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:35.284 00:28:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:35.284 00:16:35.284 --- 10.0.0.1 ping statistics --- 00:16:35.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.284 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:35.284 00:28:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.284 00:28:51 -- nvmf/common.sh@421 -- # return 0 00:16:35.284 00:28:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.284 00:28:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.284 00:28:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.284 00:28:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.284 00:28:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.284 00:28:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.284 00:28:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.284 00:28:51 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:35.284 00:28:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.284 00:28:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:35.284 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.284 00:28:51 -- nvmf/common.sh@469 -- # nvmfpid=72038 00:16:35.284 00:28:51 -- nvmf/common.sh@470 -- # waitforlisten 72038 00:16:35.284 00:28:51 -- common/autotest_common.sh@819 -- # '[' -z 72038 ']' 00:16:35.284 00:28:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:35.284 00:28:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.284 00:28:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:35.284 00:28:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
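The nvmf_veth_init sequence above is what gives the rest of the run its test network: the target interfaces (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on nvmf_init_if, and the veth peers are joined through the nvmf_br bridge, with iptables admitting TCP/4420 and bridge-local forwarding. A trimmed-down sketch of the same topology, limited to a single target interface and with all names and addresses copied from this run, looks like:

    # Minimal veth/netns topology in the style of nvmf_veth_init (one target NIC only).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # the namespaced target address should now answer

The sub-0.1 ms ping times reported above are just that reachability check: 10.0.0.2 and 10.0.0.3 answer from the namespace, and 10.0.0.1 answers from inside it, before nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt), which is the process being waited on here.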
00:16:35.284 00:28:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:35.284 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:16:35.284 [2024-09-29 00:28:51.099443] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:16:35.284 [2024-09-29 00:28:51.099525] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.543 [2024-09-29 00:28:51.235545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.543 [2024-09-29 00:28:51.303479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.543 [2024-09-29 00:28:51.303654] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.543 [2024-09-29 00:28:51.303669] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.543 [2024-09-29 00:28:51.303679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.543 [2024-09-29 00:28:51.303853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.543 [2024-09-29 00:28:51.303867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.480 00:28:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:36.480 00:28:52 -- common/autotest_common.sh@852 -- # return 0 00:16:36.480 00:28:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.480 00:28:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:36.480 00:28:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.480 00:28:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.480 00:28:52 -- host/multipath.sh@33 -- # nvmfapp_pid=72038 00:16:36.480 00:28:52 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.740 [2024-09-29 00:28:52.383448] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.740 00:28:52 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:36.999 Malloc0 00:16:36.999 00:28:52 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:37.259 00:28:52 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.518 00:28:53 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:37.518 [2024-09-29 00:28:53.342854] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:37.518 00:28:53 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:37.777 [2024-09-29 00:28:53.566997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:37.777 00:28:53 -- host/multipath.sh@44 -- # bdevperf_pid=72088 00:16:37.777 00:28:53 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
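Everything the multipath test needs on the target side was provisioned just above through rpc.py: one TCP transport, a 64 MB malloc bdev with 512-byte blocks, and the nqn.2016-06.io.spdk:cnode1 subsystem (created with -r, i.e. ANA reporting enabled) exposed on the same address through two listeners, ports 4420 and 4421, so that each listener can later be put into a different ANA state. Collected in one place, with every value copied from the commands above, the sequence is:

    # Sketch of the target provisioning driven by multipath.sh; rpc.py talks to
    # the nvmf_tgt started above on its default /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf instance started at the end of this block (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) is the initiator-side workload generator; it starts idle (-z) and only begins issuing I/O after its own RPC socket is used below to attach an Nvme0 controller over both listener ports.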
00:16:37.777 00:28:53 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.777 00:28:53 -- host/multipath.sh@47 -- # waitforlisten 72088 /var/tmp/bdevperf.sock 00:16:37.777 00:28:53 -- common/autotest_common.sh@819 -- # '[' -z 72088 ']' 00:16:37.777 00:28:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:37.777 00:28:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:37.777 00:28:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:37.777 00:28:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.777 00:28:53 -- common/autotest_common.sh@10 -- # set +x 00:16:39.156 00:28:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.156 00:28:54 -- common/autotest_common.sh@852 -- # return 0 00:16:39.156 00:28:54 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:39.156 00:28:54 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:39.415 Nvme0n1 00:16:39.415 00:28:55 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:39.675 Nvme0n1 00:16:39.675 00:28:55 -- host/multipath.sh@78 -- # sleep 1 00:16:39.675 00:28:55 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:41.065 00:28:56 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:41.065 00:28:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:41.065 00:28:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:41.324 00:28:56 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:41.324 00:28:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:41.324 00:28:56 -- host/multipath.sh@65 -- # dtrace_pid=72139 00:16:41.324 00:28:56 -- host/multipath.sh@66 -- # sleep 6 00:16:47.892 00:29:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:47.892 00:29:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:47.892 00:29:03 -- host/multipath.sh@67 -- # active_port=4421 00:16:47.892 00:29:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.892 Attaching 4 probes... 
00:16:47.892 @path[10.0.0.2, 4421]: 19872 00:16:47.892 @path[10.0.0.2, 4421]: 20143 00:16:47.892 @path[10.0.0.2, 4421]: 20055 00:16:47.892 @path[10.0.0.2, 4421]: 20357 00:16:47.892 @path[10.0.0.2, 4421]: 20026 00:16:47.892 00:29:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:47.892 00:29:03 -- host/multipath.sh@69 -- # sed -n 1p 00:16:47.892 00:29:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:47.892 00:29:03 -- host/multipath.sh@69 -- # port=4421 00:16:47.892 00:29:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:47.892 00:29:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:47.892 00:29:03 -- host/multipath.sh@72 -- # kill 72139 00:16:47.892 00:29:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:47.892 00:29:03 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:47.892 00:29:03 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:47.892 00:29:03 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:48.151 00:29:03 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:48.151 00:29:03 -- host/multipath.sh@65 -- # dtrace_pid=72255 00:16:48.151 00:29:03 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:48.151 00:29:03 -- host/multipath.sh@66 -- # sleep 6 00:16:54.719 00:29:09 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:54.719 00:29:09 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:54.719 00:29:10 -- host/multipath.sh@67 -- # active_port=4420 00:16:54.719 00:29:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.719 Attaching 4 probes... 
00:16:54.719 @path[10.0.0.2, 4420]: 19775 00:16:54.719 @path[10.0.0.2, 4420]: 20019 00:16:54.719 @path[10.0.0.2, 4420]: 20149 00:16:54.719 @path[10.0.0.2, 4420]: 19784 00:16:54.719 @path[10.0.0.2, 4420]: 20255 00:16:54.719 00:29:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:54.719 00:29:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:54.719 00:29:10 -- host/multipath.sh@69 -- # sed -n 1p 00:16:54.719 00:29:10 -- host/multipath.sh@69 -- # port=4420 00:16:54.719 00:29:10 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:54.719 00:29:10 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:54.719 00:29:10 -- host/multipath.sh@72 -- # kill 72255 00:16:54.719 00:29:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:54.719 00:29:10 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:54.719 00:29:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:54.719 00:29:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:54.978 00:29:10 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:54.978 00:29:10 -- host/multipath.sh@65 -- # dtrace_pid=72373 00:16:54.978 00:29:10 -- host/multipath.sh@66 -- # sleep 6 00:16:54.978 00:29:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.543 00:29:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:01.543 00:29:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:01.543 00:29:16 -- host/multipath.sh@67 -- # active_port=4421 00:17:01.543 00:29:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.543 Attaching 4 probes... 
00:17:01.543 @path[10.0.0.2, 4421]: 14751 00:17:01.543 @path[10.0.0.2, 4421]: 20228 00:17:01.543 @path[10.0.0.2, 4421]: 20124 00:17:01.543 @path[10.0.0.2, 4421]: 19703 00:17:01.543 @path[10.0.0.2, 4421]: 19986 00:17:01.543 00:29:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:01.543 00:29:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:01.543 00:29:16 -- host/multipath.sh@69 -- # sed -n 1p 00:17:01.543 00:29:16 -- host/multipath.sh@69 -- # port=4421 00:17:01.543 00:29:16 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.543 00:29:16 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:01.543 00:29:16 -- host/multipath.sh@72 -- # kill 72373 00:17:01.543 00:29:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:01.543 00:29:16 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:01.543 00:29:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:01.543 00:29:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:01.802 00:29:17 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:01.802 00:29:17 -- host/multipath.sh@65 -- # dtrace_pid=72491 00:17:01.802 00:29:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:01.802 00:29:17 -- host/multipath.sh@66 -- # sleep 6 00:17:08.390 00:29:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:08.390 00:29:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:08.390 00:29:23 -- host/multipath.sh@67 -- # active_port= 00:17:08.390 00:29:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.390 Attaching 4 probes... 
00:17:08.390 00:17:08.390 00:17:08.390 00:17:08.390 00:17:08.390 00:17:08.390 00:29:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:08.390 00:29:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:08.390 00:29:23 -- host/multipath.sh@69 -- # sed -n 1p 00:17:08.390 00:29:23 -- host/multipath.sh@69 -- # port= 00:17:08.390 00:29:23 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:08.390 00:29:23 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:08.390 00:29:23 -- host/multipath.sh@72 -- # kill 72491 00:17:08.390 00:29:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:08.390 00:29:23 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:08.390 00:29:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:08.390 00:29:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:08.650 00:29:24 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:08.650 00:29:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.650 00:29:24 -- host/multipath.sh@65 -- # dtrace_pid=72599 00:17:08.650 00:29:24 -- host/multipath.sh@66 -- # sleep 6 00:17:15.227 00:29:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.227 00:29:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:15.227 00:29:30 -- host/multipath.sh@67 -- # active_port=4421 00:17:15.227 00:29:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.227 Attaching 4 probes... 
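Every confirm_io_on_port cycle above follows the same pattern: set the ANA state of the two listeners with nvmf_subsystem_listener_set_ana_state, let a bpftrace probe (scripts/bpf/nvmf_path.bt, attached to the target pid) record which @path the I/O actually lands on, and then compare the port that nvmf_subsystem_get_listeners reports for the expected ANA state against the port parsed out of the trace. A condensed sketch of one such cycle, with the commands and the jq/awk parsing copied from this run (trace.txt is assumed to hold the probe output gathered during the sleep window), is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Make port 4421 the optimized path and leave 4420 non-optimized.
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # Port the target itself reports as optimized.
    expected=$($rpc nvmf_subsystem_get_listeners $nqn \
               | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # Port the I/O actually used, according to the @path[addr, port]: counters in the trace.
    seen=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

    [[ "$seen" == "$expected" ]] && echo "I/O confirmed on port $expected"

The empty probe output in the inaccessible/inaccessible case above is the expected degenerate form of the same check: with no usable path there are no @path counters, so both the parsed port and the expected port come back empty.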
00:17:15.227 @path[10.0.0.2, 4421]: 19532 00:17:15.227 @path[10.0.0.2, 4421]: 19789 00:17:15.227 @path[10.0.0.2, 4421]: 19741 00:17:15.227 @path[10.0.0.2, 4421]: 19287 00:17:15.227 @path[10.0.0.2, 4421]: 19574 00:17:15.227 00:29:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.227 00:29:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:15.227 00:29:30 -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.227 00:29:30 -- host/multipath.sh@69 -- # port=4421 00:17:15.227 00:29:30 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.227 00:29:30 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:15.227 00:29:30 -- host/multipath.sh@72 -- # kill 72599 00:17:15.227 00:29:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.227 00:29:30 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:15.227 [2024-09-29 00:29:30.761514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761875] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 [2024-09-29 00:29:30.761925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149e230 is same with the state(5) to be set 00:17:15.227 00:29:30 -- host/multipath.sh@101 -- # sleep 1 00:17:16.189 00:29:31 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:16.189 00:29:31 -- host/multipath.sh@65 -- # dtrace_pid=72728 00:17:16.189 00:29:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:16.189 00:29:31 -- host/multipath.sh@66 -- # sleep 6 00:17:22.755 00:29:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:22.755 00:29:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:22.755 00:29:38 -- host/multipath.sh@67 -- # active_port=4420 00:17:22.755 00:29:38 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.755 Attaching 4 probes... 
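For reference, the confirm_io_on_port steps traced above (and continued in the @path counter dump below) boil down to the shell sketch that follows. It is reconstructed from the xtrace lines only: variable names such as $rootdir, $bdevperf_pid and $trace_file are placeholders, and the assumption that bpftrace.sh backgrounds bpftrace and echoes its pid is inferred from the dtrace_pid assignment in the log; the authoritative version is test/nvmf/host/multipath.sh in the SPDK repo.

    # Reconstructed sketch of: confirm_io_on_port <ana_state> <expected_port>
    confirm_io_on_port() {
        local ana_state=$1 expected_port=$2 dtrace_pid active_port port

        # Attach the nvmf_path.bt probes to the running bdevperf process;
        # assumption: bpftrace.sh backgrounds bpftrace, writes its per-path
        # counters to $trace_file and prints the bpftrace pid on stdout.
        dtrace_pid=$("$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" \
            "$rootdir/scripts/bpf/nvmf_path.bt")

        sleep 6   # let I/O accumulate on whichever path is currently active

        # Ask the target which listener currently reports the requested ANA state.
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners \
            nqn.2016-06.io.spdk:cnode1 \
            | jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

        # Pull the port out of the first "@path[10.0.0.2, <port>]: <count>" line.
        port=$(cut -d ']' -f1 "$trace_file" \
            | awk '$1=="@path[10.0.0.2," {print $2}' \
            | sed -n 1p)

        kill "$dtrace_pid"
        rm -f "$trace_file"

        # Both the ANA view and the observed I/O must agree with the expectation.
        [[ $active_port == "$expected_port" && $port == "$expected_port" ]]
    }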
00:17:22.755 @path[10.0.0.2, 4420]: 19092 00:17:22.755 @path[10.0.0.2, 4420]: 19309 00:17:22.755 @path[10.0.0.2, 4420]: 19428 00:17:22.755 @path[10.0.0.2, 4420]: 19395 00:17:22.755 @path[10.0.0.2, 4420]: 19552 00:17:22.755 00:29:38 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:22.755 00:29:38 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:22.755 00:29:38 -- host/multipath.sh@69 -- # sed -n 1p 00:17:22.755 00:29:38 -- host/multipath.sh@69 -- # port=4420 00:17:22.755 00:29:38 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:22.755 00:29:38 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:22.755 00:29:38 -- host/multipath.sh@72 -- # kill 72728 00:17:22.755 00:29:38 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:22.755 00:29:38 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:22.755 [2024-09-29 00:29:38.331834] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:22.755 00:29:38 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:23.014 00:29:38 -- host/multipath.sh@111 -- # sleep 6 00:17:29.605 00:29:44 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:29.605 00:29:44 -- host/multipath.sh@65 -- # dtrace_pid=72902 00:17:29.605 00:29:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72038 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:29.605 00:29:44 -- host/multipath.sh@66 -- # sleep 6 00:17:34.878 00:29:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:34.878 00:29:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:35.138 00:29:50 -- host/multipath.sh@67 -- # active_port=4421 00:17:35.138 00:29:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.138 Attaching 4 probes... 
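After confirming that traffic failed over to the non_optimized 4420 path, the test brings the 4421 path back and promotes it, which is the pair of rpc.py calls traced above. In isolation (with /home/vagrant/spdk_repo/spdk abbreviated to $rootdir) the failback step is:

    # Re-create the TCP listener on port 4421 and mark it optimized so the
    # host's next ANA log page read makes it the preferred path again.
    "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

    sleep 6                              # give the initiator time to re-read ANA state
    confirm_io_on_port optimized 4421    # then verify I/O moved back to 4421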
00:17:35.138 @path[10.0.0.2, 4421]: 18860 00:17:35.138 @path[10.0.0.2, 4421]: 19205 00:17:35.138 @path[10.0.0.2, 4421]: 19928 00:17:35.138 @path[10.0.0.2, 4421]: 19709 00:17:35.138 @path[10.0.0.2, 4421]: 19626 00:17:35.138 00:29:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:35.138 00:29:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.138 00:29:50 -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.138 00:29:50 -- host/multipath.sh@69 -- # port=4421 00:17:35.138 00:29:50 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.138 00:29:50 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:35.138 00:29:50 -- host/multipath.sh@72 -- # kill 72902 00:17:35.138 00:29:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.138 00:29:50 -- host/multipath.sh@114 -- # killprocess 72088 00:17:35.138 00:29:50 -- common/autotest_common.sh@926 -- # '[' -z 72088 ']' 00:17:35.138 00:29:50 -- common/autotest_common.sh@930 -- # kill -0 72088 00:17:35.138 00:29:50 -- common/autotest_common.sh@931 -- # uname 00:17:35.138 00:29:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.138 00:29:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72088 00:17:35.138 killing process with pid 72088 00:17:35.138 00:29:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:35.138 00:29:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:35.138 00:29:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72088' 00:17:35.138 00:29:50 -- common/autotest_common.sh@945 -- # kill 72088 00:17:35.138 00:29:50 -- common/autotest_common.sh@950 -- # wait 72088 00:17:35.409 Connection closed with partial response: 00:17:35.409 00:17:35.409 00:17:35.409 00:29:51 -- host/multipath.sh@116 -- # wait 72088 00:17:35.409 00:29:51 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:35.409 [2024-09-29 00:28:53.640710] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:35.409 [2024-09-29 00:28:53.640836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72088 ] 00:17:35.409 [2024-09-29 00:28:53.780834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.409 [2024-09-29 00:28:53.848961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.409 Running I/O for 90 seconds... 
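The shutdown traced above goes through the killprocess helper from test/common/autotest_common.sh. A sketch of the path exercised here, reconstructed only from the traced commands (the sudo branch is checked but not taken, since the process name is reactor_2), looks roughly like this:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # the '[' -z 72088 ']' argument check
        kill -0 "$pid" || return 0           # nothing to do if it already exited
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then # here: reactor_2, so a plain SIGTERM
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                          # reap bdevperf; the caller then dumps try.txt
    }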
00:17:35.409 [2024-09-29 00:29:03.819064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.409 [2024-09-29 00:29:03.819142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.409 [2024-09-29 00:29:03.819303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.409 [2024-09-29 00:29:03.819336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.409 [2024-09-29 00:29:03.819418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.409 [2024-09-29 00:29:03.819452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.409 [2024-09-29 00:29:03.819695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.409 [2024-09-29 00:29:03.819709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.819930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.819966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.819987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.820766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106728 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.820978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.820994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821209] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 
00:29:03.821686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.410 [2024-09-29 00:29:03.821807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.410 [2024-09-29 00:29:03.821878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.410 [2024-09-29 00:29:03.821893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.821913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.821928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.821948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.821963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.821984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.821999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.822930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.411 [2024-09-29 00:29:03.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.822994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.823372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.825124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.411 [2024-09-29 00:29:03.825156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.411 [2024-09-29 00:29:03.825185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 
00:29:03.825482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.825939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.825961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.825977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.826000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.826015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.826037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:03.826052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:03.826075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:03.826104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:10.381668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.381776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:10.381816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.381853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.381889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.381925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.381961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.381983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:10.382255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:35.412 [2024-09-29 00:29:10.382292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.412 [2024-09-29 00:29:10.382328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.412 [2024-09-29 00:29:10.382440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.412 [2024-09-29 00:29:10.382463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.382934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.382970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.383006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.383089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.383607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.413 [2024-09-29 00:29:10.383682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.413 [2024-09-29 00:29:10.383807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.413 [2024-09-29 00:29:10.383829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.383844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.383865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.383880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.383901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.383937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.383951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.383972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.383997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.384471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.384521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
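(Reader note, not part of the console output.) The NOTICE pairs above come from SPDK's nvme_qpair.c: 243:nvme_io_qpair_print_command echoes each queued READ/WRITE and 474:spdk_nvme_print_completion echoes its completion, and every completion in this phase carries the status ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02), i.e. status code type 0x3 (path-related) with status code 0x02 -- what the target returns while the namespace's ANA group is reported as inaccessible. Below is a minimal triage sketch in Python (illustrative only, not part of the test suite; the file name console.log is an assumption) that tallies these entries from a saved copy of this output:

#!/usr/bin/env python3
# Tally the nvme_qpair.c NOTICE entries by opcode and by completion status.
# Whitespace is normalized first because the console wraps entries across lines.
import re
from collections import Counter

cmd_re = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (\w+) sqid:(\d+) cid:(\d+)")
cpl_re = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \(([0-9a-f]{2})/([0-9a-f]{2})\)")

text = re.sub(r"\s+", " ", open("console.log").read())   # hypothetical saved copy of this log
commands = Counter(m.group(1) for m in cmd_re.finditer(text))                 # READ / WRITE counts
completions = Counter((m.group(1), m.group(2), m.group(3)) for m in cpl_re.finditer(text))

print("commands:", dict(commands))          # e.g. {'READ': ..., 'WRITE': ...}
print("completions:", dict(completions))    # e.g. {('ASYMMETRIC ACCESS INACCESSIBLE', '03', '02'): ...}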
00:17:35.414 [2024-09-29 00:29:10.384730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.384841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.384878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.384916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.384969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.384993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.385009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.414 [2024-09-29 00:29:10.385349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.414 [2024-09-29 00:29:10.385404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.414 [2024-09-29 00:29:10.385427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.385791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:17:35.415 [2024-09-29 00:29:10.385957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.385972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.385993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.386008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.386036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.386051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.387757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.387944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.387990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.388011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.388043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.388059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.388091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.388106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.388138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:10.388153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:10.388185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.415 [2024-09-29 00:29:10.388200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:17.454232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:17.454294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.415 [2024-09-29 00:29:17.454342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.415 [2024-09-29 00:29:17.454398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.454440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 
[2024-09-29 00:29:17.454514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.454550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.454586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4568 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.454935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.454972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.454993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.455008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.455191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.416 [2024-09-29 00:29:17.455605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.455965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.455987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.416 [2024-09-29 00:29:17.456600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.416 [2024-09-29 00:29:17.456622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.456691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.456744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
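(Reader note, not part of the console output.) The p/m/dnr flags and the (03/02) pair printed with each completion are fields of the 16-bit NVMe completion status halfword. A minimal sketch, assuming the standard NVMe completion status layout (phase tag in bit 0, SC in bits 8:1, SCT in bits 11:9, CRD in bits 13:12, M in bit 14, DNR in bit 15); the value 0x0604 in the example is the encoding consistent with the entries above, not a value taken from the log:

# Unpack the completion status halfword into the fields echoed in this log.
def decode_status(status: int) -> dict:
    return {
        "p":   status & 0x1,           # phase tag
        "sc":  (status >> 1) & 0xFF,   # status code (0x02 here)
        "sct": (status >> 9) & 0x7,    # status code type (0x3 = path-related)
        "crd": (status >> 12) & 0x3,   # command retry delay
        "m":   (status >> 14) & 0x1,   # more
        "dnr": (status >> 15) & 0x1,   # do not retry
    }

# 0x0604 decodes to sct=0x3, sc=0x02 -- i.e. ASYMMETRIC ACCESS INACCESSIBLE (03/02),
# with p:0 m:0 dnr:0 as printed in the completions above.
assert decode_status(0x0604) == {"p": 0, "sc": 0x02, "sct": 0x3, "crd": 0, "m": 0, "dnr": 0}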
00:17:35.417 [2024-09-29 00:29:17.456765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.456852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.456960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.456982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.456997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.457077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.457167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.457439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.457478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457894] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.457973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.457998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.458014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.458037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.458053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.458075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.458091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.458128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.458143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 
00:29:17.459647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.459684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.459732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.417 [2024-09-29 00:29:17.459808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.417 [2024-09-29 00:29:17.459882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:35.417 [2024-09-29 00:29:17.459903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.459951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.459974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.459991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5536 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.460906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.460969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.460985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.461058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.461131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.418 [2024-09-29 00:29:17.461276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.461325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.418 
[2024-09-29 00:29:17.461346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.418 [2024-09-29 00:29:17.461361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.418 [2024-09-29 00:29:17.461409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 
cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.461938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.461960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.461976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.462552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.462599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.462710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.462747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.462964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.462979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.463128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.463164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.463373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.419 [2024-09-29 00:29:17.463409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:35.419 [2024-09-29 00:29:17.463446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.419 [2024-09-29 00:29:17.463467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.419 [2024-09-29 00:29:17.463482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.463968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.463984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:35.420 
[2024-09-29 00:29:17.464684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.464904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.464967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.464983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.465006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.465022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.465045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.420 [2024-09-29 00:29:17.465062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.465084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.420 [2024-09-29 00:29:17.465100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.420 [2024-09-29 00:29:17.465123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465497] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465873] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.465910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.465967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.465982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.421 [2024-09-29 00:29:17.466582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 
lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.421 [2024-09-29 00:29:17.466655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:35.421 [2024-09-29 00:29:17.466678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.466694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.466730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.466766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.466811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.466848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.466884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.466906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.466921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:17:35.422 [2024-09-29 00:29:17.468792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.468842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.468863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.468877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.422 [2024-09-29 00:29:17.469427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.422 [2024-09-29 00:29:17.469574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:35.422 [2024-09-29 00:29:17.469595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.469610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.469646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.469697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.469732] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.469775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.469812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.469847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.469868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.479770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.479820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479881] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.479974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.479995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:35.423 [2024-09-29 00:29:17.480480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.480861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.480912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.480963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.480993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.481014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.481065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.481116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.481167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.481217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.423 [2024-09-29 00:29:17.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:35.423 [2024-09-29 00:29:17.481298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.423 [2024-09-29 00:29:17.481319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.481457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.481567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.481880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.481941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.481971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.481992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:17:35.424 [2024-09-29 00:29:17.482072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.482917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.482967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.482997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.483018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.483069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.483121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.483171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.483222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.483325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.424 [2024-09-29 00:29:17.483404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.424 [2024-09-29 00:29:17.483466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:35.424 [2024-09-29 00:29:17.483496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.483517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.483568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.483618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.483669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.483781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.483834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.483885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.483945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.483975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.483995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
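Annotation: the NOTICE pairs throughout this stretch are SPDK's per-I/O error reporting from nvme_qpair.c — nvme_io_qpair_print_command() echoes each outstanding READ/WRITE (sqid, cid, nsid, lba, len) and spdk_nvme_print_completion() prints the matching error completion. The value in parentheses is the NVMe status code type and status code: ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the path-related ANA-inaccessible status, while the ABORTED - SQ DELETION (00/08) completions that appear further down are commands flushed when the submission queue was deleted. As a rough sketch (not part of this run; build.log is only a placeholder name for a saved copy of this console output), the statuses can be tallied with a grep pipeline:

  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [^(]*([0-9a-f]*/[0-9a-f]*)' build.log \
    | sed 's/.*NOTICE\*: //' | sort | uniq -c | sort -rn

This only summarizes lines already printed above and below; it does not alter or re-run the test.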
00:17:35.425 [2024-09-29 00:29:17.484208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5776 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.484895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.484945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.484981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.485001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.485051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.485102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.485152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.485203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.485254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.485304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.485368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.425 [2024-09-29 00:29:17.485393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:17.486023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:17.486062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:30.761994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:30.762037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:30.762063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:30.762078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:30.762093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:30.762106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.425 [2024-09-29 00:29:30.762120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.425 [2024-09-29 00:29:30.762133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.762687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.762731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.762746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 
00:29:30.763282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.426 [2024-09-29 00:29:30.763698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.426 [2024-09-29 00:29:30.763755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.426 [2024-09-29 00:29:30.763768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.763795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.763852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.763997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.427 [2024-09-29 00:29:30.764697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.427 [2024-09-29 00:29:30.764905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.427 [2024-09-29 00:29:30.764918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.764932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.764946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.764960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.764973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.764987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4176 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 
[2024-09-29 00:29:30.765475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.765944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.765978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.765996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.766010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.766025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.766038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.766067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.766080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.766094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:35.428 [2024-09-29 00:29:30.766107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.766122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.428 [2024-09-29 00:29:30.766135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.428 [2024-09-29 00:29:30.766149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.429 [2024-09-29 00:29:30.766330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5c50 is same with the state(5) to be set 00:17:35.429 [2024-09-29 00:29:30.766361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:35.429 [2024-09-29 00:29:30.766371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:35.429 [2024-09-29 00:29:30.766382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3800 len:8 PRP1 0x0 PRP2 0x0 00:17:35.429 [2024-09-29 00:29:30.766407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.429 [2024-09-29 00:29:30.766452] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c5c50 was disconnected and freed. reset controller. 00:17:35.429 [2024-09-29 00:29:30.767490] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:35.429 [2024-09-29 00:29:30.767571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2b20 (9): Bad file descriptor 00:17:35.429 [2024-09-29 00:29:30.767860] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:35.429 [2024-09-29 00:29:30.767930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:35.429 [2024-09-29 00:29:30.767977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:35.429 [2024-09-29 00:29:30.767997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a2b20 with addr=10.0.0.2, port=4421 00:17:35.429 [2024-09-29 00:29:30.768013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a2b20 is same with the state(5) to be set 00:17:35.429 [2024-09-29 00:29:30.768044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a2b20 (9): Bad file descriptor 00:17:35.429 [2024-09-29 00:29:30.768072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:35.429 [2024-09-29 00:29:30.768087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:35.429 [2024-09-29 00:29:30.768102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:35.429 [2024-09-29 00:29:30.768131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:35.429 [2024-09-29 00:29:30.768145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:35.429 [2024-09-29 00:29:40.822865] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
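The wall of "ABORTED - SQ DELETION" completions above is the expected shape of a path failure in this test: once the listener the host was using disappears, every command still queued on I/O qpair 1 is completed manually with an abort status, bdev_nvme frees the disconnected qpair (0x8c5c50), and the host reconnects to the surviving listener at 10.0.0.2 port 4421, after which the reset is reported successful. A failover of this kind can be driven from the target side by toggling listeners with the same rpc.py subcommands that appear elsewhere in this log; the loop below is only an illustrative sketch, not the literal multipath.sh test script, and the address, NQN and ports are simply the ones this run uses.

  # Illustrative failover driver (assumed wrapper, not the actual test script).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for port in 4421 4420; do
      # Removing the active listener aborts queued I/O ("SQ DELETION") and
      # forces bdev_nvme to reset the controller and reconnect via the other port.
      "$rpc_py" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
      sleep 5
      # Restore the listener so the path is available for the next toggle.
      "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
  done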
00:17:35.429 Received shutdown signal, test time was about 55.361453 seconds
00:17:35.429
00:17:35.429 Latency(us)
00:17:35.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.429 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:35.429 Verification LBA range: start 0x0 length 0x4000
00:17:35.429 Nvme0n1 : 55.36 11235.80 43.89 0.00 0.00 11374.55 296.03 7046430.72
00:17:35.429 ===================================================================================================================
00:17:35.429 Total : 11235.80 43.89 0.00 0.00 11374.55 296.03 7046430.72
00:17:35.429 00:29:51 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:35.688 00:29:51 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:17:35.688 00:29:51 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:35.688 00:29:51 -- host/multipath.sh@125 -- # nvmftestfini
00:17:35.688 00:29:51 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:35.688 00:29:51 -- nvmf/common.sh@116 -- # sync
00:17:35.688 00:29:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:35.688 00:29:51 -- nvmf/common.sh@119 -- # set +e
00:17:35.688 00:29:51 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:35.688 00:29:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:35.688 rmmod nvme_tcp
00:17:35.688 rmmod nvme_fabrics
00:17:35.688 rmmod nvme_keyring
00:17:35.688 00:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:35.688 00:29:51 -- nvmf/common.sh@123 -- # set -e
00:17:35.688 00:29:51 -- nvmf/common.sh@124 -- # return 0
00:17:35.688 00:29:51 -- nvmf/common.sh@477 -- # '[' -n 72038 ']'
00:17:35.688 00:29:51 -- nvmf/common.sh@478 -- # killprocess 72038
00:17:35.688 00:29:51 -- common/autotest_common.sh@926 -- # '[' -z 72038 ']'
00:17:35.688 00:29:51 -- common/autotest_common.sh@930 -- # kill -0 72038
00:17:35.688 00:29:51 -- common/autotest_common.sh@931 -- # uname
00:17:35.688 00:29:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:35.688 00:29:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72038
00:17:35.948 00:29:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:35.948 killing process with pid 72038
00:17:35.948 00:29:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:35.948 00:29:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72038'
00:17:35.948 00:29:51 -- common/autotest_common.sh@945 -- # kill 72038
00:17:35.948 00:29:51 -- common/autotest_common.sh@950 -- # wait 72038
00:17:35.948 00:29:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:35.948 00:29:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:35.948 00:29:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:35.948 00:29:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:35.948 00:29:51 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:35.948 00:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:35.948 00:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:35.948 00:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:35.948 00:29:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:17:35.948
00:17:35.948 real 1m1.180s
00:17:35.948 user 2m48.811s
00:17:35.948 sys 0m18.514s
00:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:35.948
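For reference, the nvmftestfini/nvmfcleanup trace above reduces to a handful of operations. The sketch below is a condensed reading of that trace rather than the exact helper functions; the pid (72038) and the interface name are simply the ones this run happened to use.

  # Condensed teardown as traced above (sketch, not the actual nvmftestfini).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # the trace shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  modprobe -v -r nvme-fabrics
  kill 72038                     # stop the nvmf_tgt reactor process (pid from this run)
  wait 72038                     # only meaningful in the shell that launched the target
  ip -4 addr flush nvmf_init_if  # clear the initiator-side veth address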
************************************ 00:17:35.948 END TEST nvmf_multipath 00:17:35.948 ************************************ 00:17:35.948 00:29:51 -- common/autotest_common.sh@10 -- # set +x 00:17:36.207 00:29:51 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:36.207 00:29:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:36.207 00:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:36.207 00:29:51 -- common/autotest_common.sh@10 -- # set +x 00:17:36.207 ************************************ 00:17:36.207 START TEST nvmf_timeout 00:17:36.207 ************************************ 00:17:36.207 00:29:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:36.207 * Looking for test storage... 00:17:36.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.207 00:29:51 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.207 00:29:51 -- nvmf/common.sh@7 -- # uname -s 00:17:36.207 00:29:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.207 00:29:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.207 00:29:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.207 00:29:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.207 00:29:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.207 00:29:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.207 00:29:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.207 00:29:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.207 00:29:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.207 00:29:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.207 00:29:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:17:36.207 00:29:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:17:36.207 00:29:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.207 00:29:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.207 00:29:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.207 00:29:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.207 00:29:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.207 00:29:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.207 00:29:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.207 00:29:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.207 00:29:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.207 00:29:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.207 00:29:51 -- paths/export.sh@5 -- # export PATH 00:17:36.207 00:29:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.207 00:29:51 -- nvmf/common.sh@46 -- # : 0 00:17:36.207 00:29:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.207 00:29:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.207 00:29:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.207 00:29:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.207 00:29:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.207 00:29:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:36.207 00:29:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.207 00:29:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.207 00:29:51 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.207 00:29:51 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.207 00:29:51 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.207 00:29:51 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:36.207 00:29:51 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.207 00:29:51 -- host/timeout.sh@19 -- # nvmftestinit 00:17:36.207 00:29:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.207 00:29:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.207 00:29:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:36.207 00:29:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.207 00:29:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.207 00:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.207 00:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.208 00:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.208 00:29:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
00:17:36.208 00:29:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:36.208 00:29:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:36.208 00:29:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:36.208 00:29:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:36.208 00:29:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:36.208 00:29:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.208 00:29:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.208 00:29:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:36.208 00:29:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:36.208 00:29:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.208 00:29:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.208 00:29:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.208 00:29:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.208 00:29:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.208 00:29:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.208 00:29:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.208 00:29:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.208 00:29:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:36.208 00:29:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:36.208 Cannot find device "nvmf_tgt_br" 00:17:36.208 00:29:51 -- nvmf/common.sh@154 -- # true 00:17:36.208 00:29:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.208 Cannot find device "nvmf_tgt_br2" 00:17:36.208 00:29:51 -- nvmf/common.sh@155 -- # true 00:17:36.208 00:29:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:36.208 00:29:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:36.208 Cannot find device "nvmf_tgt_br" 00:17:36.208 00:29:51 -- nvmf/common.sh@157 -- # true 00:17:36.208 00:29:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:36.208 Cannot find device "nvmf_tgt_br2" 00:17:36.208 00:29:51 -- nvmf/common.sh@158 -- # true 00:17:36.208 00:29:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:36.208 00:29:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:36.467 00:29:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.467 00:29:52 -- nvmf/common.sh@161 -- # true 00:17:36.467 00:29:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.467 00:29:52 -- nvmf/common.sh@162 -- # true 00:17:36.467 00:29:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.467 00:29:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.467 00:29:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.467 00:29:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.467 00:29:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.467 00:29:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.467 00:29:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:17:36.467 00:29:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:36.467 00:29:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:36.467 00:29:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:36.467 00:29:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:36.467 00:29:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:36.468 00:29:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:36.468 00:29:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.468 00:29:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.468 00:29:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.468 00:29:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:36.468 00:29:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:36.468 00:29:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.468 00:29:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.468 00:29:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.468 00:29:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.468 00:29:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.468 00:29:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:36.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:36.468 00:17:36.468 --- 10.0.0.2 ping statistics --- 00:17:36.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.468 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:36.468 00:29:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:36.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:36.468 00:17:36.468 --- 10.0.0.3 ping statistics --- 00:17:36.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.468 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:36.468 00:29:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:36.468 00:17:36.468 --- 10.0.0.1 ping statistics --- 00:17:36.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.468 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:36.468 00:29:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.468 00:29:52 -- nvmf/common.sh@421 -- # return 0 00:17:36.468 00:29:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:36.468 00:29:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.468 00:29:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:36.468 00:29:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:36.468 00:29:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.468 00:29:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:36.468 00:29:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:36.468 00:29:52 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:36.468 00:29:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:36.468 00:29:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:36.468 00:29:52 -- common/autotest_common.sh@10 -- # set +x 00:17:36.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.468 00:29:52 -- nvmf/common.sh@469 -- # nvmfpid=73208 00:17:36.468 00:29:52 -- nvmf/common.sh@470 -- # waitforlisten 73208 00:17:36.468 00:29:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:36.468 00:29:52 -- common/autotest_common.sh@819 -- # '[' -z 73208 ']' 00:17:36.468 00:29:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.468 00:29:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.468 00:29:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.468 00:29:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.468 00:29:52 -- common/autotest_common.sh@10 -- # set +x 00:17:36.734 [2024-09-29 00:29:52.343745] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:36.735 [2024-09-29 00:29:52.343851] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.735 [2024-09-29 00:29:52.482013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.735 [2024-09-29 00:29:52.536008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:36.735 [2024-09-29 00:29:52.536175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.735 [2024-09-29 00:29:52.536187] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.735 [2024-09-29 00:29:52.536196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
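Before the target was launched, the nvmf_veth_init trace above built a small virtual topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target's two addresses (10.0.0.2 and 10.0.0.3) live on interfaces moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of the three veth pairs are joined by the nvmf_br bridge, with iptables admitting port 4420 and bridge-internal forwarding. The three pings confirm reachability, and nvmf_tgt is then started inside the namespace via 'ip netns exec'. Condensed from that trace (error handling and the cleanup of stale devices omitted), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                       # initiator -> target
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator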
00:17:36.735 [2024-09-29 00:29:52.536440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.735 [2024-09-29 00:29:52.536447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.713 00:29:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:37.713 00:29:53 -- common/autotest_common.sh@852 -- # return 0 00:17:37.713 00:29:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:37.713 00:29:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:37.713 00:29:53 -- common/autotest_common.sh@10 -- # set +x 00:17:37.713 00:29:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.713 00:29:53 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.713 00:29:53 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:37.972 [2024-09-29 00:29:53.589777] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.972 00:29:53 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:38.231 Malloc0 00:17:38.231 00:29:53 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:38.489 00:29:54 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:38.749 00:29:54 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.006 [2024-09-29 00:29:54.629048] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.007 00:29:54 -- host/timeout.sh@32 -- # bdevperf_pid=73261 00:17:39.007 00:29:54 -- host/timeout.sh@34 -- # waitforlisten 73261 /var/tmp/bdevperf.sock 00:17:39.007 00:29:54 -- common/autotest_common.sh@819 -- # '[' -z 73261 ']' 00:17:39.007 00:29:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.007 00:29:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.007 00:29:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.007 00:29:54 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:39.007 00:29:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.007 00:29:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.007 [2024-09-29 00:29:54.705596] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
00:17:39.007 [2024-09-29 00:29:54.705701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73261 ] 00:17:39.007 [2024-09-29 00:29:54.845089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.265 [2024-09-29 00:29:54.915400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.831 00:29:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.831 00:29:55 -- common/autotest_common.sh@852 -- # return 0 00:17:39.831 00:29:55 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:40.089 00:29:55 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:40.346 NVMe0n1 00:17:40.346 00:29:56 -- host/timeout.sh@51 -- # rpc_pid=73283 00:17:40.346 00:29:56 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.346 00:29:56 -- host/timeout.sh@53 -- # sleep 1 00:17:40.604 Running I/O for 10 seconds... 00:17:41.538 00:29:57 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.798 [2024-09-29 00:29:57.404598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-09-29 00:29:57.404608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 id:0 cdw10:00000000 cdw11:00000000 00:17:41.798 [2024-09-29 00:29:57.404702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.798 [2024-09-29 00:29:57.404730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.798 [2024-09-29 00:29:57.404740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.798 [2024-09-29 00:29:57.404749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.798 [2024-09-29 00:29:57.404758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.798 [2024-09-29 00:29:57.404766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with t[2024-09-29 00:29:57.404775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(5) to be set 00:17:41.798 id:0 cdw10:00000000 cdw11:00000000 00:17:41.798 [2024-09-29 00:29:57.404782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.798 [2024-09-29 00:29:57.404791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300010 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.798 [2024-09-29 00:29:57.404847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.799 [2024-09-29 00:29:57.404855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6480 is same with the state(5) to be set 00:17:41.799 [2024-09-29 00:29:57.405403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.405928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.405940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:41.799 [2024-09-29 00:29:57.406381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.406868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.406890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.406910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.406961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.406971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 
00:29:57.407284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.407359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.407403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.407960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.407971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.799 [2024-09-29 00:29:57.407981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.799 [2024-09-29 00:29:57.408352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.799 [2024-09-29 00:29:57.408372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.408396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.408482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.408614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.408636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.408987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.408997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.409467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.409896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.409918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.409951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.409961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125496 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.410845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.410866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.410878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:41.800 [2024-09-29 00:29:57.410987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.411004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.411014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.411154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.411166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.411179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.411278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.411299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.800 [2024-09-29 00:29:57.411310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.800 [2024-09-29 00:29:57.411322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.800 [2024-09-29 00:29:57.411558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.411576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.411587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.411599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.411608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.411620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.411630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.411641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.411651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.411933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 
00:29:57.412004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.412052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.412536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.412579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.412615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.412884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.413894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.413909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.414476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.414715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.414743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.414765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.414787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.414798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.414808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.415202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.415245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.415266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.801 [2024-09-29 00:29:57.415671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.801 [2024-09-29 00:29:57.415727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.801 [2024-09-29 00:29:57.415737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.415748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.802 [2024-09-29 00:29:57.415980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:41.802 [2024-09-29 00:29:57.416129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.802 [2024-09-29 00:29:57.416291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.802 [2024-09-29 00:29:57.416542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.802 [2024-09-29 00:29:57.416565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.802 [2024-09-29 00:29:57.416587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13630c0 is same with the state(5) to be set 00:17:41.802 [2024-09-29 00:29:57.416612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:41.802 [2024-09-29 00:29:57.416621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:41.802 [2024-09-29 00:29:57.416630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125192 len:8 PRP1 0x0 PRP2 0x0 00:17:41.802 [2024-09-29 00:29:57.416640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.416650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:41.802 [2024-09-29 00:29:57.416899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:41.802 [2024-09-29 00:29:57.416909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125216 len:8 PRP1 0x0 PRP2 0x0 00:17:41.802 [2024-09-29 00:29:57.417151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.417179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:41.802 [2024-09-29 00:29:57.417189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:41.802 [2024-09-29 00:29:57.417199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125224 len:8 PRP1 0x0 PRP2 0x0 00:17:41.802 [2024-09-29 00:29:57.417208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.417219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:41.802 [2024-09-29 00:29:57.417227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:41.802 [2024-09-29 00:29:57.417235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125248 len:8 PRP1 0x0 PRP2 0x0 00:17:41.802 [2024-09-29 00:29:57.417362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.802 [2024-09-29 00:29:57.417617] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13630c0 was disconnected and freed. reset controller. 
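The long dump above records the fault injection at the heart of host/timeout.sh: at 00:29:57 the script removes the target's 10.0.0.2:4420 listener while bdevperf's verify job is in flight, so every queued command on the I/O qpair is completed as ABORTED - SQ DELETION (00/08), qpair 0x13630c0 is disconnected and freed, and bdev_nvme begins its reconnect/reset loop. A condensed sketch of that sequence follows, using the same rpc.py invocations that appear earlier in the log; the rpc shorthand is introduced here only for readability.

    # Sketch of the timeout scenario host/timeout.sh drives in this run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Initiator side: retry forever at the transport level, but bound the
    # controller-level reconnect window to 5 s with 2 s between attempts.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Fault injection: drop the target's listener while I/O is in flight.
    # Pending commands are then completed as ABORTED - SQ DELETION (00/08),
    # the qpair is freed, and bdev_nvme starts resetting the controller.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

With --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, the initiator is expected to retry roughly every two seconds and give up after about five, which is the cadence the following log lines show.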
00:17:41.802 [2024-09-29 00:29:57.417691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300010 (9): Bad file descriptor 00:17:41.802 [2024-09-29 00:29:57.418164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.802 [2024-09-29 00:29:57.418309] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.802 [2024-09-29 00:29:57.418628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.802 [2024-09-29 00:29:57.418692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.802 [2024-09-29 00:29:57.418710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300010 with addr=10.0.0.2, port=4420 00:17:41.802 [2024-09-29 00:29:57.418722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300010 is same with the state(5) to be set 00:17:41.802 [2024-09-29 00:29:57.418850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300010 (9): Bad file descriptor 00:17:41.802 [2024-09-29 00:29:57.419127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.802 [2024-09-29 00:29:57.419256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:41.802 [2024-09-29 00:29:57.419275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:41.802 [2024-09-29 00:29:57.419532] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:41.802 [2024-09-29 00:29:57.419561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.802 00:29:57 -- host/timeout.sh@56 -- # sleep 2 00:17:43.700 [2024-09-29 00:29:59.419674] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.700 [2024-09-29 00:29:59.419800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.700 [2024-09-29 00:29:59.419842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:43.700 [2024-09-29 00:29:59.419874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300010 with addr=10.0.0.2, port=4420 00:17:43.700 [2024-09-29 00:29:59.420157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300010 is same with the state(5) to be set 00:17:43.700 [2024-09-29 00:29:59.420196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300010 (9): Bad file descriptor 00:17:43.700 [2024-09-29 00:29:59.420217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.700 [2024-09-29 00:29:59.420227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:43.700 [2024-09-29 00:29:59.420238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:43.700 [2024-09-29 00:29:59.420263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
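The connect() failures carry errno 111, which on Linux is ECONNREFUSED: the bridge still forwards the traffic, but nothing listens on 10.0.0.2:4420 anymore, so each attempt is refused and retried two seconds later (00:29:57, then 00:29:59 above). In a scenario where the outage is meant to heal before the 5-second controller-loss budget runs out, restoring the listener is all that is needed for the next attempt to succeed; a sketch using the same rpc.py call the script issues later in this run:

    # Sketch: re-adding the listener lets a subsequent reconnect attempt
    # succeed. Same invocation host/timeout.sh uses later in the log; here
    # the timeout is allowed to expire first, so this is illustrative only.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420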
00:17:43.700 [2024-09-29 00:29:59.420275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.700 00:29:59 -- host/timeout.sh@57 -- # get_controller 00:17:43.700 00:29:59 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:43.700 00:29:59 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:43.957 00:29:59 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:43.957 00:29:59 -- host/timeout.sh@58 -- # get_bdev 00:17:43.957 00:29:59 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:43.957 00:29:59 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:44.215 00:29:59 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:44.215 00:29:59 -- host/timeout.sh@61 -- # sleep 5 00:17:45.593 [2024-09-29 00:30:01.420444] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.593 [2024-09-29 00:30:01.420562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.593 [2024-09-29 00:30:01.420607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:45.593 [2024-09-29 00:30:01.420625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1300010 with addr=10.0.0.2, port=4420 00:17:45.593 [2024-09-29 00:30:01.420639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1300010 is same with the state(5) to be set 00:17:45.593 [2024-09-29 00:30:01.420684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1300010 (9): Bad file descriptor 00:17:45.593 [2024-09-29 00:30:01.420719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:45.593 [2024-09-29 00:30:01.420730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:45.593 [2024-09-29 00:30:01.420740] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:45.593 [2024-09-29 00:30:01.420768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:45.593 [2024-09-29 00:30:01.420779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:48.123 [2024-09-29 00:30:03.420827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:48.123 [2024-09-29 00:30:03.420893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:48.123 [2024-09-29 00:30:03.420907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:48.123 [2024-09-29 00:30:03.420917] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:48.123 [2024-09-29 00:30:03.420943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
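The get_controller and get_bdev probes traced at host/timeout.sh@41 and @37 are plain RPC queries against the bdevperf socket filtered through jq: at this point (@57/@58) they still report NVMe0 and NVMe0n1, while the same probes a few seconds later (@62/@63 below) come back empty once the controller has been given up on. A minimal sketch of what those helpers amount to, reusing the socket path and RPC names from the trace; the function bodies are reconstructed here, not copied from host/timeout.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Name of the attached NVMe controller ("NVMe0"); empty once it is deleted.
get_controller() {
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

# Name of the exposed bdev ("NVMe0n1"); empty once it is deleted.
get_bdev() {
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
}

# The checks seen in the trace: both still present while reconnects are failing.
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]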
00:17:48.690 00:17:48.690 Latency(us) 00:17:48.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.690 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:48.690 Verification LBA range: start 0x0 length 0x4000 00:17:48.690 NVMe0n1 : 8.14 1915.84 7.48 15.72 0.00 66295.43 3008.70 7046430.72 00:17:48.690 =================================================================================================================== 00:17:48.690 Total : 1915.84 7.48 15.72 0.00 66295.43 3008.70 7046430.72 00:17:48.690 0 00:17:49.257 00:30:04 -- host/timeout.sh@62 -- # get_controller 00:17:49.257 00:30:04 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:49.257 00:30:04 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:49.515 00:30:05 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:49.515 00:30:05 -- host/timeout.sh@63 -- # get_bdev 00:17:49.515 00:30:05 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:49.515 00:30:05 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:49.774 00:30:05 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:49.774 00:30:05 -- host/timeout.sh@65 -- # wait 73283 00:17:49.774 00:30:05 -- host/timeout.sh@67 -- # killprocess 73261 00:17:49.774 00:30:05 -- common/autotest_common.sh@926 -- # '[' -z 73261 ']' 00:17:49.774 00:30:05 -- common/autotest_common.sh@930 -- # kill -0 73261 00:17:49.774 00:30:05 -- common/autotest_common.sh@931 -- # uname 00:17:49.774 00:30:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.774 00:30:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73261 00:17:49.774 killing process with pid 73261 00:17:49.774 Received shutdown signal, test time was about 9.192277 seconds 00:17:49.774 00:17:49.774 Latency(us) 00:17:49.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.774 =================================================================================================================== 00:17:49.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.774 00:30:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:49.774 00:30:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:49.774 00:30:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73261' 00:17:49.774 00:30:05 -- common/autotest_common.sh@945 -- # kill 73261 00:17:49.774 00:30:05 -- common/autotest_common.sh@950 -- # wait 73261 00:17:50.033 00:30:05 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.033 [2024-09-29 00:30:05.849615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.033 00:30:05 -- host/timeout.sh@74 -- # bdevperf_pid=73404 00:17:50.033 00:30:05 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:50.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:50.033 00:30:05 -- host/timeout.sh@76 -- # waitforlisten 73404 /var/tmp/bdevperf.sock 00:17:50.033 00:30:05 -- common/autotest_common.sh@819 -- # '[' -z 73404 ']' 00:17:50.033 00:30:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.033 00:30:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.033 00:30:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.033 00:30:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.033 00:30:05 -- common/autotest_common.sh@10 -- # set +x 00:17:50.291 [2024-09-29 00:30:05.911702] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:17:50.291 [2024-09-29 00:30:05.911787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73404 ] 00:17:50.291 [2024-09-29 00:30:06.041424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.291 [2024-09-29 00:30:06.094542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.550 00:30:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:50.550 00:30:06 -- common/autotest_common.sh@852 -- # return 0 00:17:50.550 00:30:06 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:50.809 00:30:06 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:51.067 NVMe0n1 00:17:51.067 00:30:06 -- host/timeout.sh@84 -- # rpc_pid=73420 00:17:51.067 00:30:06 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:51.067 00:30:06 -- host/timeout.sh@86 -- # sleep 1 00:17:51.067 Running I/O for 10 seconds... 
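This stage (host/timeout.sh@78 through @86) wires up the second scenario: bdevperf is started with a 10-second verify job (-q 128 -o 4096 -w verify -t 10 -f), the NVMe bdev retry count is set to -1, and the controller is attached with an explicit reconnect policy so that a lost connection is retried every second, new I/O starts failing after two seconds, and the controller is only declared lost after five. A condensed sketch of the same RPC sequence, using the socket path, NQN and timeout values shown in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Retry count of -1, as in host/timeout.sh@78.
"$rpc" -s "$sock" bdev_nvme_set_options -r -1

# Attach the target with the reconnect policy under test:
#   --reconnect-delay-sec 1       retry the TCP connection every second
#   --fast-io-fail-timeout-sec 2  start failing new I/O after 2 s without a connection
#   --ctrlr-loss-timeout-sec 5    give up on the controller after 5 s of failures
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
    --reconnect-delay-sec 1

# Kick off the workload; the script keeps the pid (rpc_pid=73420 above) so it
# can wait for the results table at the end of the run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
rpc_pid=$!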
00:17:52.004 00:30:07 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.266 [2024-09-29 00:30:08.033919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.033975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9467b0 is same with the state(5) to be set 00:17:52.266 [2024-09-29 00:30:08.034232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 
00:30:08.034458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.266 [2024-09-29 00:30:08.034554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.266 [2024-09-29 00:30:08.034565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.034783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.034792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035401] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.267 [2024-09-29 00:30:08.035703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.267 [2024-09-29 00:30:08.035912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.267 [2024-09-29 00:30:08.035922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.035930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.035940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.035948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.035959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.035967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.035977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.035985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.035995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 
[2024-09-29 00:30:08.036013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:129056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036198] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036637] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.268 [2024-09-29 00:30:08.036766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.268 [2024-09-29 00:30:08.036786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.268 [2024-09-29 00:30:08.036796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.036824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 
lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.036982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.036993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 
[2024-09-29 00:30:08.037319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.269 [2024-09-29 00:30:08.037378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.037408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.037416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.038659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.039077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.039469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.039877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.040289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.269 [2024-09-29 00:30:08.040713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.041109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9db0c0 is same with the state(5) to be set 00:17:52.269 [2024-09-29 00:30:08.041519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:52.269 [2024-09-29 00:30:08.041727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:52.269 [2024-09-29 00:30:08.041943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128776 len:8 PRP1 0x0 PRP2 0x0 
00:17:52.269 [2024-09-29 00:30:08.042220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.042277] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9db0c0 was disconnected and freed. reset controller. 00:17:52.269 [2024-09-29 00:30:08.042398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.269 [2024-09-29 00:30:08.042417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.042428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.269 [2024-09-29 00:30:08.042438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.042448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.269 [2024-09-29 00:30:08.042458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.269 [2024-09-29 00:30:08.042468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:52.269 [2024-09-29 00:30:08.042477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:52.270 [2024-09-29 00:30:08.042486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:17:52.270 [2024-09-29 00:30:08.042709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:52.270 [2024-09-29 00:30:08.042731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:17:52.270 [2024-09-29 00:30:08.042830] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.270 [2024-09-29 00:30:08.042896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.270 [2024-09-29 00:30:08.042943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:52.270 [2024-09-29 00:30:08.042961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 00:17:52.270 [2024-09-29 00:30:08.042972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:17:52.270 [2024-09-29 00:30:08.042990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:17:52.270 [2024-09-29 00:30:08.043007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.270 [2024-09-29 00:30:08.043018] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:52.270 [2024-09-29 00:30:08.043028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:52.270 [2024-09-29 00:30:08.043049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:52.270 [2024-09-29 00:30:08.043061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:52.270 00:30:08 -- host/timeout.sh@90 -- # sleep 1 
00:17:53.204 [2024-09-29 00:30:09.043185] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 
00:17:53.204 [2024-09-29 00:30:09.043818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:17:53.204 [2024-09-29 00:30:09.044101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:17:53.204 [2024-09-29 00:30:09.044377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 
00:17:53.204 [2024-09-29 00:30:09.044819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 
00:17:53.204 [2024-09-29 00:30:09.045242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 
00:17:53.204 [2024-09-29 00:30:09.045696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:17:53.204 [2024-09-29 00:30:09.046071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:17:53.204 [2024-09-29 00:30:09.046502] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:53.204 [2024-09-29 00:30:09.046765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:53.204 [2024-09-29 00:30:09.046978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:53.462 00:30:09 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:53.720 [2024-09-29 00:30:09.299536] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:17:53.720 00:30:09 -- host/timeout.sh@92 -- # wait 73420 
00:17:54.286 [2024-09-29 00:30:10.062732] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:02.421 
00:18:02.421 Latency(us) 
00:18:02.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:02.421 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:18:02.421 Verification LBA range: start 0x0 length 0x4000 
00:18:02.421 NVMe0n1 : 10.01 9888.27 38.63 0.00 0.00 12917.61 997.93 3019898.88 
00:18:02.421 =================================================================================================================== 
00:18:02.421 Total : 9888.27 38.63 0.00 0.00 12917.61 997.93 3019898.88 
00:18:02.421 0 
00:18:02.421 00:30:16 -- host/timeout.sh@97 -- # rpc_pid=73529 
00:18:02.421 00:30:16 -- host/timeout.sh@98 -- # sleep 1 
00:18:02.421 00:30:16 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:18:02.421 Running I/O for 10 seconds... 
00:18:02.421 00:30:17 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.421 [2024-09-29 00:30:18.153976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.421 [2024-09-29 00:30:18.154086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.422 [2024-09-29 00:30:18.154094] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.422 [2024-09-29 00:30:18.154102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.422 [2024-09-29 00:30:18.154109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9454a0 is same with the state(5) to be set 00:18:02.422 [2024-09-29 00:30:18.154163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 
[2024-09-29 00:30:18.154272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:129144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.154517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154528] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.154541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.154560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.154638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.154789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.154798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.155351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.155374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.155417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.155457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155468] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.422 [2024-09-29 00:30:18.155582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.422 [2024-09-29 00:30:18.155594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.422 [2024-09-29 00:30:18.155603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.155614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.155623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.155634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.155643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129384 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:02.423 [2024-09-29 00:30:18.156605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.423 [2024-09-29 00:30:18.156769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 
00:30:18.156810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.423 [2024-09-29 00:30:18.156862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.423 [2024-09-29 00:30:18.156871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.156882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.156891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.156903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.156912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.157583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:129008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.157834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.157968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.424 [2024-09-29 00:30:18.158733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.158753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.158765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.424 [2024-09-29 00:30:18.159146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.424 [2024-09-29 00:30:18.159534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.425 [2024-09-29 00:30:18.159762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.425 [2024-09-29 00:30:18.159793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 
00:30:18.159922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.425 [2024-09-29 00:30:18.159951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.159961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eecc0 is same with the state(5) to be set 00:18:02.425 [2024-09-29 00:30:18.159974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:02.425 [2024-09-29 00:30:18.159981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:02.425 [2024-09-29 00:30:18.159990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129176 len:8 PRP1 0x0 PRP2 0x0 00:18:02.425 [2024-09-29 00:30:18.159999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.160042] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9eecc0 was disconnected and freed. reset controller. 00:18:02.425 [2024-09-29 00:30:18.160154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.425 [2024-09-29 00:30:18.160171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.160182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.425 [2024-09-29 00:30:18.160192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.160201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.425 [2024-09-29 00:30:18.160210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.160219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:02.425 [2024-09-29 00:30:18.160228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:02.425 [2024-09-29 00:30:18.160236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:18:02.425 [2024-09-29 00:30:18.160496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.425 [2024-09-29 00:30:18.160521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:18:02.425 [2024-09-29 00:30:18.160618] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, 
errno = 111 00:18:02.425 [2024-09-29 00:30:18.160673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.425 [2024-09-29 00:30:18.160717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:02.425 [2024-09-29 00:30:18.160748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 00:18:02.425 [2024-09-29 00:30:18.160759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:18:02.425 [2024-09-29 00:30:18.160793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:18:02.425 [2024-09-29 00:30:18.161076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.425 [2024-09-29 00:30:18.161092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:02.425 [2024-09-29 00:30:18.161103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:02.425 [2024-09-29 00:30:18.161125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:02.425 [2024-09-29 00:30:18.161137] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:02.425 00:30:18 -- host/timeout.sh@101 -- # sleep 3 00:18:03.359 [2024-09-29 00:30:19.161265] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.359 [2024-09-29 00:30:19.161701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.359 [2024-09-29 00:30:19.161761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:03.359 [2024-09-29 00:30:19.161782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 00:18:03.359 [2024-09-29 00:30:19.161797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:18:03.359 [2024-09-29 00:30:19.161835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:18:03.359 [2024-09-29 00:30:19.161854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:03.359 [2024-09-29 00:30:19.161865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:03.359 [2024-09-29 00:30:19.161876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:03.359 [2024-09-29 00:30:19.161905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:03.359 [2024-09-29 00:30:19.161917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:04.806 [2024-09-29 00:30:20.162051] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.806 [2024-09-29 00:30:20.162169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.806 [2024-09-29 00:30:20.162211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:04.806 [2024-09-29 00:30:20.162227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 00:18:04.806 [2024-09-29 00:30:20.162241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:18:04.806 [2024-09-29 00:30:20.162267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:18:04.806 [2024-09-29 00:30:20.162284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.806 [2024-09-29 00:30:20.162293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:04.806 [2024-09-29 00:30:20.162303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:04.806 [2024-09-29 00:30:20.162328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:04.806 [2024-09-29 00:30:20.162338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.431 [2024-09-29 00:30:21.163506] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.431 [2024-09-29 00:30:21.163625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.431 [2024-09-29 00:30:21.163670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:05.431 [2024-09-29 00:30:21.163687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x978010 with addr=10.0.0.2, port=4420 00:18:05.431 [2024-09-29 00:30:21.163701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x978010 is same with the state(5) to be set 00:18:05.431 [2024-09-29 00:30:21.163874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x978010 (9): Bad file descriptor 00:18:05.431 [2024-09-29 00:30:21.164065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:05.431 [2024-09-29 00:30:21.164078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:05.431 [2024-09-29 00:30:21.164103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:05.431 [2024-09-29 00:30:21.166993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
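The errno = 111 loop above is the expected behaviour while the target's TCP listener is down: each reconnect attempt from bdev_nvme calls connect() against 10.0.0.2:4420, nothing is listening, and the socket layer returns ECONNREFUSED, so the controller reset is retried on the next cycle. A quick way to confirm the errno mapping on the test host (an illustrative one-liner, not part of host/timeout.sh):

    # errno 111 as reported by uring_sock_create/posix_sock_create above
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused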
00:18:05.431 [2024-09-29 00:30:21.167030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.431 00:30:21 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.688 [2024-09-29 00:30:21.422997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.688 00:30:21 -- host/timeout.sh@103 -- # wait 73529 00:18:06.621 [2024-09-29 00:30:22.196291] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:11.888 00:18:11.888 Latency(us) 00:18:11.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.888 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.888 Verification LBA range: start 0x0 length 0x4000 00:18:11.888 NVMe0n1 : 10.01 8435.08 32.95 5915.79 0.00 8902.60 875.05 3019898.88 00:18:11.888 =================================================================================================================== 00:18:11.888 Total : 8435.08 32.95 5915.79 0.00 8902.60 0.00 3019898.88 00:18:11.888 0 00:18:11.888 00:30:27 -- host/timeout.sh@105 -- # killprocess 73404 00:18:11.888 00:30:27 -- common/autotest_common.sh@926 -- # '[' -z 73404 ']' 00:18:11.888 00:30:27 -- common/autotest_common.sh@930 -- # kill -0 73404 00:18:11.888 00:30:27 -- common/autotest_common.sh@931 -- # uname 00:18:11.888 00:30:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:11.888 00:30:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73404 00:18:11.888 killing process with pid 73404 00:18:11.888 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.888 00:18:11.888 Latency(us) 00:18:11.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.888 =================================================================================================================== 00:18:11.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.888 00:30:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:11.888 00:30:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:11.888 00:30:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73404' 00:18:11.888 00:30:27 -- common/autotest_common.sh@945 -- # kill 73404 00:18:11.888 00:30:27 -- common/autotest_common.sh@950 -- # wait 73404 00:18:11.888 00:30:27 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:18:11.888 00:30:27 -- host/timeout.sh@110 -- # bdevperf_pid=73639 00:18:11.888 00:30:27 -- host/timeout.sh@112 -- # waitforlisten 73639 /var/tmp/bdevperf.sock 00:18:11.888 00:30:27 -- common/autotest_common.sh@819 -- # '[' -z 73639 ']' 00:18:11.888 00:30:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.888 00:30:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.888 00:30:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
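Once host/timeout.sh@102 re-adds the listener, the target prints "NVMe/TCP Target Listening on 10.0.0.2 port 4420" and the very next reconnect attempt completes, giving the "Resetting controller successful" line and the verify-job summary above. A minimal sketch of the listener toggle that produces this disconnect/reconnect pattern, using the same rpc.py calls traced in this run (the remove step ran earlier in the script and appears again at host/timeout.sh@126 below; the shell variable names are mine):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener so every reconnect attempt fails with errno 111.
    $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420

    # Give the host a few reconnect cycles; the script uses 'sleep 3' for this window (@101).
    sleep 3

    # Restore the listener; the next reconnect attempt should reset the controller successfully.
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420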
00:18:11.888 00:30:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.888 00:30:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.888 [2024-09-29 00:30:27.294384] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:11.888 [2024-09-29 00:30:27.295322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ] 00:18:11.888 [2024-09-29 00:30:27.431433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.888 [2024-09-29 00:30:27.489190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.455 00:30:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.455 00:30:28 -- common/autotest_common.sh@852 -- # return 0 00:18:12.455 00:30:28 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 73639 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:12.455 00:30:28 -- host/timeout.sh@116 -- # dtrace_pid=73655 00:18:12.455 00:30:28 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:13.024 00:30:28 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:13.283 NVMe0n1 00:18:13.283 00:30:28 -- host/timeout.sh@124 -- # rpc_pid=73702 00:18:13.283 00:30:28 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.283 00:30:28 -- host/timeout.sh@125 -- # sleep 1 00:18:13.283 Running I/O for 10 seconds... 
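The block above (host/timeout.sh@109 through @125) prepares the second scenario: bdevperf is started in wait mode (-z) on its own RPC socket, the nvmf_timeout.bt bpftrace script is attached to it so reconnect events get recorded, bdev_nvme options are set, and the controller is attached with a 2 s reconnect delay and a 5 s controller-loss timeout before the 10-second random-read job starts. A condensed, hedged replay of that sequence; every command, path and flag is copied from the trace, while the backgrounding, variable names and comments are added here for readability:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # bdevperf with no initial config (-z): bdevs are added over RPC; 128 QD, 4 KiB random reads, 10 s.
    $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # Attach the bpftrace probes to bdevperf; the test captures their output into
    # test/nvmf/host/trace.txt, which is dumped at the end of the run.
    $spdk/scripts/bpftrace.sh $bdevperf_pid $spdk/scripts/bpf/nvmf_timeout.bt &
    dtrace_pid=$!

    # Same bdev_nvme options as the trace, then attach the controller with
    # --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5.
    $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Start the queued I/O against the NVMe0n1 bdev that the attach created.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests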
00:18:14.220 00:30:29 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.481 [2024-09-29 00:30:30.201098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 
[2024-09-29 00:30:30.201839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.201988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.201998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202044] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.481 [2024-09-29 00:30:30.202378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.481 [2024-09-29 00:30:30.202388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:14.482 [2024-09-29 00:30:30.202841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.202988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.202997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.482 [2024-09-29 00:30:30.203142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.482 [2024-09-29 00:30:30.203153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:14.483 [2024-09-29 00:30:30.203776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.483 [2024-09-29 00:30:30.203889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.483 [2024-09-29 00:30:30.203899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.203917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.203935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.203953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.203971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.203989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.203997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.204015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.204033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.204051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.484 [2024-09-29 00:30:30.204070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f30c0 is same with the state(5) to be set 00:18:14.484 [2024-09-29 00:30:30.204093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.484 [2024-09-29 00:30:30.204100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.484 [2024-09-29 00:30:30.204108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91016 len:8 PRP1 0x0 PRP2 0x0 00:18:14.484 [2024-09-29 00:30:30.204119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204159] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f30c0 was disconnected and freed. reset controller. 
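Everything from the nvmf_subsystem_remove_listener call at host/timeout.sh@126 down to "qpair 0x15f30c0 was disconnected and freed" is one event: tearing down the listener kills the I/O queue pair, so every queued READ is completed manually and logged as ABORTED - SQ DELETION before the reconnect cycle begins. When digging through a capture like this it is usually enough to count those notices rather than read them; a small helper (the log filename is a placeholder, not a file produced by the test):

    # Collapse the abort flood into counts instead of scrolling through it.
    grep -c 'ABORTED - SQ DELETION' bdevperf-console.log
    grep -c 'was disconnected and freed' bdevperf-console.log   # one line per torn-down qpair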
00:18:14.484 [2024-09-29 00:30:30.204241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.484 [2024-09-29 00:30:30.204257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.484 [2024-09-29 00:30:30.204276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.484 [2024-09-29 00:30:30.204293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.484 [2024-09-29 00:30:30.204309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.484 [2024-09-29 00:30:30.204317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1590010 is same with the state(5) to be set 00:18:14.484 [2024-09-29 00:30:30.204636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.484 [2024-09-29 00:30:30.204663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590010 (9): Bad file descriptor 00:18:14.484 [2024-09-29 00:30:30.204795] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.484 [2024-09-29 00:30:30.204860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.484 [2024-09-29 00:30:30.205969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:14.484 [2024-09-29 00:30:30.206001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1590010 with addr=10.0.0.2, port=4420 00:18:14.484 [2024-09-29 00:30:30.206013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1590010 is same with the state(5) to be set 00:18:14.484 [2024-09-29 00:30:30.206037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590010 (9): Bad file descriptor 00:18:14.484 [2024-09-29 00:30:30.206070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:14.484 [2024-09-29 00:30:30.206082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:14.484 [2024-09-29 00:30:30.206092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:14.484 [2024-09-29 00:30:30.206115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:14.484 [2024-09-29 00:30:30.206126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:14.484 00:30:30 -- host/timeout.sh@128 -- # wait 73702 00:18:16.387 [2024-09-29 00:30:32.206269] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.387 [2024-09-29 00:30:32.206381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.387 [2024-09-29 00:30:32.206428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.387 [2024-09-29 00:30:32.206445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1590010 with addr=10.0.0.2, port=4420 00:18:16.387 [2024-09-29 00:30:32.206475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1590010 is same with the state(5) to be set 00:18:16.387 [2024-09-29 00:30:32.206500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590010 (9): Bad file descriptor 00:18:16.387 [2024-09-29 00:30:32.206519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.387 [2024-09-29 00:30:32.206529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:16.387 [2024-09-29 00:30:32.206539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.387 [2024-09-29 00:30:32.206564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:16.387 [2024-09-29 00:30:32.206575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:18.922 [2024-09-29 00:30:34.206731] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.922 [2024-09-29 00:30:34.206856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.922 [2024-09-29 00:30:34.206900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:18.922 [2024-09-29 00:30:34.206917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1590010 with addr=10.0.0.2, port=4420 00:18:18.922 [2024-09-29 00:30:34.206929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1590010 is same with the state(5) to be set 00:18:18.922 [2024-09-29 00:30:34.206953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590010 (9): Bad file descriptor 00:18:18.922 [2024-09-29 00:30:34.206972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.922 [2024-09-29 00:30:34.206981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:18.922 [2024-09-29 00:30:34.206990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:18.922 [2024-09-29 00:30:34.207016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:18.922 [2024-09-29 00:30:34.207027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.830 [2024-09-29 00:30:36.207102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:20.830 [2024-09-29 00:30:36.207166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:20.830 [2024-09-29 00:30:36.207194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:20.830 [2024-09-29 00:30:36.207205] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:20.830 [2024-09-29 00:30:36.207230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:21.398 00:18:21.398 Latency(us) 00:18:21.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.398 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:21.398 NVMe0n1 : 8.18 2282.33 8.92 15.65 0.00 55656.49 7119.59 7015926.69 00:18:21.398 =================================================================================================================== 00:18:21.398 Total : 2282.33 8.92 15.65 0.00 55656.49 7119.59 7015926.69 00:18:21.398 0 00:18:21.398 00:30:37 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.398 Attaching 5 probes... 00:18:21.398 1425.077555: reset bdev controller NVMe0 00:18:21.398 1425.167825: reconnect bdev controller NVMe0 00:18:21.398 3426.616975: reconnect delay bdev controller NVMe0 00:18:21.398 3426.635417: reconnect bdev controller NVMe0 00:18:21.398 5427.071076: reconnect delay bdev controller NVMe0 00:18:21.398 5427.089998: reconnect bdev controller NVMe0 00:18:21.398 7427.526008: reconnect delay bdev controller NVMe0 00:18:21.398 7427.564397: reconnect bdev controller NVMe0 00:18:21.398 00:30:37 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:21.398 00:30:37 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:21.398 00:30:37 -- host/timeout.sh@136 -- # kill 73655 00:18:21.398 00:30:37 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:21.398 00:30:37 -- host/timeout.sh@139 -- # killprocess 73639 00:18:21.398 00:30:37 -- common/autotest_common.sh@926 -- # '[' -z 73639 ']' 00:18:21.398 00:30:37 -- common/autotest_common.sh@930 -- # kill -0 73639 00:18:21.398 00:30:37 -- common/autotest_common.sh@931 -- # uname 00:18:21.658 00:30:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:21.658 00:30:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73639 00:18:21.658 killing process with pid 73639 00:18:21.658 Received shutdown signal, test time was about 8.250330 seconds 00:18:21.658 00:18:21.658 Latency(us) 00:18:21.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.658 =================================================================================================================== 00:18:21.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.658 00:30:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:21.658 00:30:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:21.658 00:30:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73639' 00:18:21.658 00:30:37 -- common/autotest_common.sh@945 -- # kill 73639 00:18:21.658 00:30:37 -- common/autotest_common.sh@950 -- # wait 73639 00:18:21.658 00:30:37 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.917 00:30:37 -- host/timeout.sh@143 -- # trap - SIGINT 
SIGTERM EXIT 00:18:21.917 00:30:37 -- host/timeout.sh@145 -- # nvmftestfini 00:18:21.917 00:30:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:21.917 00:30:37 -- nvmf/common.sh@116 -- # sync 00:18:21.917 00:30:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:21.917 00:30:37 -- nvmf/common.sh@119 -- # set +e 00:18:21.917 00:30:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:21.917 00:30:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:21.917 rmmod nvme_tcp 00:18:21.917 rmmod nvme_fabrics 00:18:22.176 rmmod nvme_keyring 00:18:22.176 00:30:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.176 00:30:37 -- nvmf/common.sh@123 -- # set -e 00:18:22.176 00:30:37 -- nvmf/common.sh@124 -- # return 0 00:18:22.176 00:30:37 -- nvmf/common.sh@477 -- # '[' -n 73208 ']' 00:18:22.176 00:30:37 -- nvmf/common.sh@478 -- # killprocess 73208 00:18:22.176 00:30:37 -- common/autotest_common.sh@926 -- # '[' -z 73208 ']' 00:18:22.176 00:30:37 -- common/autotest_common.sh@930 -- # kill -0 73208 00:18:22.176 00:30:37 -- common/autotest_common.sh@931 -- # uname 00:18:22.176 00:30:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.176 00:30:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73208 00:18:22.176 killing process with pid 73208 00:18:22.176 00:30:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:22.176 00:30:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:22.176 00:30:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73208' 00:18:22.176 00:30:37 -- common/autotest_common.sh@945 -- # kill 73208 00:18:22.176 00:30:37 -- common/autotest_common.sh@950 -- # wait 73208 00:18:22.435 00:30:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.435 00:30:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.435 00:30:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.435 00:30:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.435 00:30:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.435 00:30:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.435 00:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.435 00:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.435 00:30:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:22.435 ************************************ 00:18:22.435 END TEST nvmf_timeout 00:18:22.435 ************************************ 00:18:22.435 00:18:22.435 real 0m46.242s 00:18:22.435 user 2m15.719s 00:18:22.435 sys 0m5.299s 00:18:22.435 00:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.435 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.435 00:30:38 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:22.435 00:30:38 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:22.435 00:30:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:22.435 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.435 00:30:38 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:18:22.435 00:18:22.435 real 10m34.293s 00:18:22.435 user 29m34.283s 00:18:22.435 sys 3m20.894s 00:18:22.435 00:30:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.435 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.435 ************************************ 00:18:22.435 END TEST nvmf_tcp 00:18:22.435 ************************************ 00:18:22.435 00:30:38 -- spdk/autotest.sh@296 -- # [[ 1 
-eq 0 ]] 00:18:22.435 00:30:38 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:22.435 00:30:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:22.435 00:30:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.435 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:18:22.435 ************************************ 00:18:22.435 START TEST nvmf_dif 00:18:22.435 ************************************ 00:18:22.435 00:30:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:22.435 * Looking for test storage... 00:18:22.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:22.435 00:30:38 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.435 00:30:38 -- nvmf/common.sh@7 -- # uname -s 00:18:22.435 00:30:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.435 00:30:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.435 00:30:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.435 00:30:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.435 00:30:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.435 00:30:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.435 00:30:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.435 00:30:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.435 00:30:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.435 00:30:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.693 00:30:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:18:22.693 00:30:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:18:22.693 00:30:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.693 00:30:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.693 00:30:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.693 00:30:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.693 00:30:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.693 00:30:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.693 00:30:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.693 00:30:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.693 00:30:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.693 00:30:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.693 00:30:38 -- paths/export.sh@5 -- # export PATH 00:18:22.693 00:30:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.693 00:30:38 -- nvmf/common.sh@46 -- # : 0 00:18:22.693 00:30:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.693 00:30:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.693 00:30:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.693 00:30:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.693 00:30:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.693 00:30:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:22.693 00:30:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.693 00:30:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.693 00:30:38 -- target/dif.sh@15 -- # NULL_META=16 00:18:22.693 00:30:38 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:22.693 00:30:38 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:22.693 00:30:38 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:22.693 00:30:38 -- target/dif.sh@135 -- # nvmftestinit 00:18:22.693 00:30:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.693 00:30:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.693 00:30:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.693 00:30:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.693 00:30:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.694 00:30:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.694 00:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:22.694 00:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.694 00:30:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:22.694 00:30:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:22.694 00:30:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:22.694 00:30:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:22.694 00:30:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:22.694 00:30:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:22.694 00:30:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.694 00:30:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.694 00:30:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.694 00:30:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:22.694 00:30:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.694 00:30:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.694 00:30:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.694 00:30:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.694 
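The nvmf_veth_init block above only defines the interface, namespace, and address names; the traced ip commands that follow build the actual topology: an initiator veth pair kept in the root namespace at 10.0.0.1, target veth pairs moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and an nvmf_br bridge tying the peer ends together. Condensed into a standalone sketch with the same names and addresses (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is omitted for brevity):

    # rebuild the single-target half of the veth/bridge topology used by nvmftestinit
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # same reachability check the trace runs

The 'Cannot find device' and 'Cannot open network namespace' messages in the trace come from the cleanup half of this setup running first on a node with no leftover interfaces; they are expected on a fresh machine.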
00:30:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.694 00:30:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.694 00:30:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.694 00:30:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.694 00:30:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:22.694 00:30:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:22.694 Cannot find device "nvmf_tgt_br" 00:18:22.694 00:30:38 -- nvmf/common.sh@154 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.694 Cannot find device "nvmf_tgt_br2" 00:18:22.694 00:30:38 -- nvmf/common.sh@155 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:22.694 00:30:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:22.694 Cannot find device "nvmf_tgt_br" 00:18:22.694 00:30:38 -- nvmf/common.sh@157 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:22.694 Cannot find device "nvmf_tgt_br2" 00:18:22.694 00:30:38 -- nvmf/common.sh@158 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:22.694 00:30:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:22.694 00:30:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.694 00:30:38 -- nvmf/common.sh@161 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.694 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.694 00:30:38 -- nvmf/common.sh@162 -- # true 00:18:22.694 00:30:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.694 00:30:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.694 00:30:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.694 00:30:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.694 00:30:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.694 00:30:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.694 00:30:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.694 00:30:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.694 00:30:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.694 00:30:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:22.694 00:30:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:22.694 00:30:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:22.694 00:30:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:22.694 00:30:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.694 00:30:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.694 00:30:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.952 00:30:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:22.952 00:30:38 -- nvmf/common.sh@192 -- # ip link set 
nvmf_br up 00:18:22.952 00:30:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.952 00:30:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.952 00:30:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:22.952 00:30:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.952 00:30:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.952 00:30:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:22.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:18:22.952 00:18:22.952 --- 10.0.0.2 ping statistics --- 00:18:22.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.952 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:22.952 00:30:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:22.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:22.952 00:18:22.952 --- 10.0.0.3 ping statistics --- 00:18:22.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.952 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:22.952 00:30:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:22.952 00:18:22.952 --- 10.0.0.1 ping statistics --- 00:18:22.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.952 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:22.952 00:30:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.952 00:30:38 -- nvmf/common.sh@421 -- # return 0 00:18:22.952 00:30:38 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:22.952 00:30:38 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:23.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.209 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.209 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.209 00:30:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.209 00:30:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:23.209 00:30:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:23.209 00:30:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.209 00:30:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:23.209 00:30:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:23.209 00:30:39 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:23.209 00:30:39 -- target/dif.sh@137 -- # nvmfappstart 00:18:23.209 00:30:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:23.209 00:30:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:23.209 00:30:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.209 00:30:39 -- nvmf/common.sh@469 -- # nvmfpid=74133 00:18:23.209 00:30:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:23.209 00:30:39 -- nvmf/common.sh@470 -- # waitforlisten 74133 00:18:23.209 00:30:39 -- common/autotest_common.sh@819 -- # '[' -z 74133 ']' 00:18:23.209 00:30:39 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:23.209 00:30:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.209 00:30:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.209 00:30:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.209 00:30:39 -- common/autotest_common.sh@10 -- # set +x 00:18:23.468 [2024-09-29 00:30:39.085932] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 00:18:23.468 [2024-09-29 00:30:39.086223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.468 [2024-09-29 00:30:39.226083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.468 [2024-09-29 00:30:39.278891] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:23.468 [2024-09-29 00:30:39.279259] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.468 [2024-09-29 00:30:39.279425] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.468 [2024-09-29 00:30:39.279597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.468 [2024-09-29 00:30:39.279707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.404 00:30:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:24.404 00:30:40 -- common/autotest_common.sh@852 -- # return 0 00:18:24.404 00:30:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:24.404 00:30:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 00:30:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.404 00:30:40 -- target/dif.sh@139 -- # create_transport 00:18:24.404 00:30:40 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:24.404 00:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 [2024-09-29 00:30:40.136747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.404 00:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.404 00:30:40 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:24.404 00:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:24.404 00:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 ************************************ 00:18:24.404 START TEST fio_dif_1_default 00:18:24.404 ************************************ 00:18:24.404 00:30:40 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:18:24.404 00:30:40 -- target/dif.sh@86 -- # create_subsystems 0 00:18:24.404 00:30:40 -- target/dif.sh@28 -- # local sub 00:18:24.404 00:30:40 -- target/dif.sh@30 -- # for sub in "$@" 00:18:24.404 00:30:40 -- target/dif.sh@31 -- # create_subsystem 0 00:18:24.404 00:30:40 -- target/dif.sh@18 -- # local sub_id=0 00:18:24.404 00:30:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 1 00:18:24.404 00:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 bdev_null0 00:18:24.404 00:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.404 00:30:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:24.404 00:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 00:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.404 00:30:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:24.404 00:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 00:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.404 00:30:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:24.404 00:30:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:24.404 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:18:24.404 [2024-09-29 00:30:40.182435] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.404 00:30:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:24.404 00:30:40 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:24.404 00:30:40 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:24.404 00:30:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:24.404 00:30:40 -- nvmf/common.sh@520 -- # config=() 00:18:24.404 00:30:40 -- nvmf/common.sh@520 -- # local subsystem config 00:18:24.404 00:30:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:24.404 00:30:40 -- target/dif.sh@82 -- # gen_fio_conf 00:18:24.404 00:30:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:24.404 00:30:40 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:24.404 00:30:40 -- target/dif.sh@54 -- # local file 00:18:24.404 00:30:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:24.404 { 00:18:24.404 "params": { 00:18:24.404 "name": "Nvme$subsystem", 00:18:24.404 "trtype": "$TEST_TRANSPORT", 00:18:24.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:24.404 "adrfam": "ipv4", 00:18:24.404 "trsvcid": "$NVMF_PORT", 00:18:24.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:24.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:24.404 "hdgst": ${hdgst:-false}, 00:18:24.404 "ddgst": ${ddgst:-false} 00:18:24.404 }, 00:18:24.404 "method": "bdev_nvme_attach_controller" 00:18:24.404 } 00:18:24.404 EOF 00:18:24.404 )") 00:18:24.404 00:30:40 -- target/dif.sh@56 -- # cat 00:18:24.404 00:30:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:24.404 00:30:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.404 00:30:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:24.404 00:30:40 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.404 00:30:40 -- common/autotest_common.sh@1320 -- # shift 00:18:24.404 00:30:40 -- nvmf/common.sh@542 -- # cat 00:18:24.404 00:30:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:24.404 
00:30:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.404 00:30:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:24.404 00:30:40 -- target/dif.sh@72 -- # (( file <= files )) 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:24.404 00:30:40 -- nvmf/common.sh@544 -- # jq . 00:18:24.404 00:30:40 -- nvmf/common.sh@545 -- # IFS=, 00:18:24.404 00:30:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:24.404 "params": { 00:18:24.404 "name": "Nvme0", 00:18:24.404 "trtype": "tcp", 00:18:24.404 "traddr": "10.0.0.2", 00:18:24.404 "adrfam": "ipv4", 00:18:24.404 "trsvcid": "4420", 00:18:24.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:24.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:24.404 "hdgst": false, 00:18:24.404 "ddgst": false 00:18:24.404 }, 00:18:24.404 "method": "bdev_nvme_attach_controller" 00:18:24.404 }' 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:24.404 00:30:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:24.404 00:30:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:24.404 00:30:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:24.663 00:30:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:24.663 00:30:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:24.663 00:30:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:24.663 00:30:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:24.663 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:24.663 fio-3.35 00:18:24.663 Starting 1 thread 00:18:24.922 [2024-09-29 00:30:40.731511] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
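At this point the first fio run of the DIF suite is under way: one job doing 4 KiB random reads at queue depth 4 against bdev_null0, which is exported over NVMe/TCP with 16 bytes of metadata and DIF type 1 while the transport performs DIF insert/strip. The 'RPC Unix domain socket path /var/tmp/spdk.sock in use' error printed here appears to be benign: the fio plugin brings up its own SPDK application environment and tries to claim the default RPC socket that the nvmf target already owns, and the job carries on without it. The target-side plumbing traced above reduces to a handful of rpc.py calls; a standalone sketch, assuming the nvmf_tgt from the log is already running and serving RPC on the default socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with DIF insert/strip (matches "nvmf_create_transport -t tcp -o --dif-insert-or-strip")
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection type 1
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420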
00:18:24.922 [2024-09-29 00:30:40.731579] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:37.127 00:18:37.127 filename0: (groupid=0, jobs=1): err= 0: pid=74205: Sun Sep 29 00:30:50 2024 00:18:37.127 read: IOPS=9330, BW=36.4MiB/s (38.2MB/s)(365MiB/10001msec) 00:18:37.127 slat (nsec): min=5742, max=82950, avg=8098.17, stdev=3981.55 00:18:37.127 clat (usec): min=325, max=4233, avg=404.70, stdev=51.70 00:18:37.127 lat (usec): min=331, max=4272, avg=412.80, stdev=52.47 00:18:37.127 clat percentiles (usec): 00:18:37.127 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:18:37.127 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 396], 60.00th=[ 408], 00:18:37.127 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 465], 95.00th=[ 490], 00:18:37.127 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 562], 99.95th=[ 578], 00:18:37.127 | 99.99th=[ 914] 00:18:37.127 bw ( KiB/s): min=35776, max=39552, per=100.00%, avg=37327.16, stdev=932.24, samples=19 00:18:37.127 iops : min= 8944, max= 9888, avg=9331.79, stdev=233.06, samples=19 00:18:37.127 lat (usec) : 500=96.54%, 750=3.45%, 1000=0.01% 00:18:37.127 lat (msec) : 2=0.01%, 10=0.01% 00:18:37.127 cpu : usr=85.64%, sys=12.18%, ctx=30, majf=0, minf=9 00:18:37.127 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.127 issued rwts: total=93316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.127 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:37.127 00:18:37.127 Run status group 0 (all jobs): 00:18:37.127 READ: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=365MiB (382MB), run=10001-10001msec 00:18:37.127 00:30:51 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:37.127 00:30:51 -- target/dif.sh@43 -- # local sub 00:18:37.127 00:30:51 -- target/dif.sh@45 -- # for sub in "$@" 00:18:37.127 00:30:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:37.127 00:30:51 -- target/dif.sh@36 -- # local sub_id=0 00:18:37.127 00:30:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 ************************************ 00:18:37.127 END TEST fio_dif_1_default 00:18:37.127 ************************************ 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:18:37.127 real 0m10.888s 00:18:37.127 user 0m9.146s 00:18:37.127 sys 0m1.437s 00:18:37.127 00:30:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:37.127 00:30:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:37.127 00:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 ************************************ 00:18:37.127 START TEST 
fio_dif_1_multi_subsystems 00:18:37.127 ************************************ 00:18:37.127 00:30:51 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:18:37.127 00:30:51 -- target/dif.sh@92 -- # local files=1 00:18:37.127 00:30:51 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:37.127 00:30:51 -- target/dif.sh@28 -- # local sub 00:18:37.127 00:30:51 -- target/dif.sh@30 -- # for sub in "$@" 00:18:37.127 00:30:51 -- target/dif.sh@31 -- # create_subsystem 0 00:18:37.127 00:30:51 -- target/dif.sh@18 -- # local sub_id=0 00:18:37.127 00:30:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 bdev_null0 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 [2024-09-29 00:30:51.129631] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@30 -- # for sub in "$@" 00:18:37.127 00:30:51 -- target/dif.sh@31 -- # create_subsystem 1 00:18:37.127 00:30:51 -- target/dif.sh@18 -- # local sub_id=1 00:18:37.127 00:30:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 bdev_null1 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.127 00:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.127 00:30:51 -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.127 00:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.127 00:30:51 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:37.127 00:30:51 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:37.127 00:30:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:37.127 00:30:51 -- nvmf/common.sh@520 -- # config=() 00:18:37.127 00:30:51 -- nvmf/common.sh@520 -- # local subsystem config 00:18:37.127 00:30:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:37.127 00:30:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:37.127 00:30:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:37.127 { 00:18:37.127 "params": { 00:18:37.127 "name": "Nvme$subsystem", 00:18:37.127 "trtype": "$TEST_TRANSPORT", 00:18:37.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.127 "adrfam": "ipv4", 00:18:37.127 "trsvcid": "$NVMF_PORT", 00:18:37.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.127 "hdgst": ${hdgst:-false}, 00:18:37.127 "ddgst": ${ddgst:-false} 00:18:37.127 }, 00:18:37.127 "method": "bdev_nvme_attach_controller" 00:18:37.127 } 00:18:37.127 EOF 00:18:37.127 )") 00:18:37.127 00:30:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:37.127 00:30:51 -- target/dif.sh@82 -- # gen_fio_conf 00:18:37.127 00:30:51 -- target/dif.sh@54 -- # local file 00:18:37.127 00:30:51 -- target/dif.sh@56 -- # cat 00:18:37.127 00:30:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:37.127 00:30:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:37.127 00:30:51 -- nvmf/common.sh@542 -- # cat 00:18:37.127 00:30:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:37.127 00:30:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.127 00:30:51 -- common/autotest_common.sh@1320 -- # shift 00:18:37.127 00:30:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:37.127 00:30:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.127 00:30:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:37.127 00:30:51 -- target/dif.sh@72 -- # (( file <= files )) 00:18:37.128 00:30:51 -- target/dif.sh@73 -- # cat 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.128 00:30:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:37.128 00:30:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:37.128 { 00:18:37.128 "params": { 00:18:37.128 "name": "Nvme$subsystem", 00:18:37.128 "trtype": "$TEST_TRANSPORT", 00:18:37.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.128 "adrfam": "ipv4", 00:18:37.128 "trsvcid": "$NVMF_PORT", 00:18:37.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.128 "hdgst": ${hdgst:-false}, 00:18:37.128 "ddgst": ${ddgst:-false} 00:18:37.128 }, 00:18:37.128 "method": "bdev_nvme_attach_controller" 00:18:37.128 } 00:18:37.128 EOF 00:18:37.128 )") 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:37.128 00:30:51 -- nvmf/common.sh@542 -- # cat 00:18:37.128 00:30:51 -- target/dif.sh@72 
-- # (( file++ )) 00:18:37.128 00:30:51 -- target/dif.sh@72 -- # (( file <= files )) 00:18:37.128 00:30:51 -- nvmf/common.sh@544 -- # jq . 00:18:37.128 00:30:51 -- nvmf/common.sh@545 -- # IFS=, 00:18:37.128 00:30:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:37.128 "params": { 00:18:37.128 "name": "Nvme0", 00:18:37.128 "trtype": "tcp", 00:18:37.128 "traddr": "10.0.0.2", 00:18:37.128 "adrfam": "ipv4", 00:18:37.128 "trsvcid": "4420", 00:18:37.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:37.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:37.128 "hdgst": false, 00:18:37.128 "ddgst": false 00:18:37.128 }, 00:18:37.128 "method": "bdev_nvme_attach_controller" 00:18:37.128 },{ 00:18:37.128 "params": { 00:18:37.128 "name": "Nvme1", 00:18:37.128 "trtype": "tcp", 00:18:37.128 "traddr": "10.0.0.2", 00:18:37.128 "adrfam": "ipv4", 00:18:37.128 "trsvcid": "4420", 00:18:37.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.128 "hdgst": false, 00:18:37.128 "ddgst": false 00:18:37.128 }, 00:18:37.128 "method": "bdev_nvme_attach_controller" 00:18:37.128 }' 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:37.128 00:30:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:37.128 00:30:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:37.128 00:30:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:37.128 00:30:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:37.128 00:30:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:37.128 00:30:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:37.128 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:37.128 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:37.128 fio-3.35 00:18:37.128 Starting 2 threads 00:18:37.128 [2024-09-29 00:30:51.804770] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
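Here fio runs two jobs at once, one per subsystem: the JSON blob printed above attaches controllers Nvme0 and Nvme1, both at 10.0.0.2:4420 but pointing at cnode0 and cnode1 respectively, and the generated job file is fed to fio through the SPDK bdev ioengine. The trace does not show the job file itself, so the options below are an approximation of what the results report, and the Nvme0n1/Nvme1n1 filenames assume SPDK's usual NvmeXn1 bdev naming for attached namespaces; a rough standalone equivalent with the config saved to a file:

    # two-subsystem fio run through the SPDK bdev plugin (sketch, job details assumed)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
          --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based=1 \
          --name=filename0 --filename=Nvme0n1 \
          --name=filename1 --filename=Nvme1n1

thread=1 is set because the SPDK ioengines generally require fio's thread mode rather than forked worker processes.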
00:18:37.128 [2024-09-29 00:30:51.804864] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:47.105 00:18:47.105 filename0: (groupid=0, jobs=1): err= 0: pid=74365: Sun Sep 29 00:31:01 2024 00:18:47.105 read: IOPS=4996, BW=19.5MiB/s (20.5MB/s)(195MiB/10001msec) 00:18:47.105 slat (nsec): min=6506, max=79035, avg=13349.03, stdev=5460.07 00:18:47.105 clat (usec): min=580, max=1277, avg=764.28, stdev=67.65 00:18:47.105 lat (usec): min=587, max=1291, avg=777.63, stdev=68.78 00:18:47.105 clat percentiles (usec): 00:18:47.105 | 1.00th=[ 635], 5.00th=[ 668], 10.00th=[ 685], 20.00th=[ 709], 00:18:47.105 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:18:47.105 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:18:47.105 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1106], 99.95th=[ 1156], 00:18:47.105 | 99.99th=[ 1237] 00:18:47.105 bw ( KiB/s): min=19456, max=20544, per=50.10%, avg=20026.95, stdev=302.13, samples=19 00:18:47.105 iops : min= 4864, max= 5136, avg=5006.74, stdev=75.53, samples=19 00:18:47.105 lat (usec) : 750=45.73%, 1000=53.98% 00:18:47.105 lat (msec) : 2=0.29% 00:18:47.105 cpu : usr=89.67%, sys=8.70%, ctx=10, majf=0, minf=0 00:18:47.105 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.105 issued rwts: total=49972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.105 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:47.105 filename1: (groupid=0, jobs=1): err= 0: pid=74366: Sun Sep 29 00:31:01 2024 00:18:47.105 read: IOPS=4996, BW=19.5MiB/s (20.5MB/s)(195MiB/10001msec) 00:18:47.105 slat (nsec): min=6451, max=77934, avg=13325.42, stdev=5396.29 00:18:47.105 clat (usec): min=606, max=1352, avg=764.05, stdev=62.22 00:18:47.105 lat (usec): min=631, max=1382, avg=777.37, stdev=62.88 00:18:47.105 clat percentiles (usec): 00:18:47.105 | 1.00th=[ 660], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 709], 00:18:47.105 | 30.00th=[ 725], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 775], 00:18:47.105 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:18:47.105 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 1106], 99.95th=[ 1156], 00:18:47.105 | 99.99th=[ 1319] 00:18:47.105 bw ( KiB/s): min=19456, max=20544, per=50.10%, avg=20026.95, stdev=304.38, samples=19 00:18:47.105 iops : min= 4864, max= 5136, avg=5006.74, stdev=76.09, samples=19 00:18:47.105 lat (usec) : 750=47.13%, 1000=52.59% 00:18:47.105 lat (msec) : 2=0.27% 00:18:47.105 cpu : usr=90.54%, sys=7.94%, ctx=9, majf=0, minf=0 00:18:47.105 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.105 issued rwts: total=49972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.105 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:47.105 00:18:47.105 Run status group 0 (all jobs): 00:18:47.106 READ: bw=39.0MiB/s (40.9MB/s), 19.5MiB/s-19.5MiB/s (20.5MB/s-20.5MB/s), io=390MiB (409MB), run=10001-10001msec 00:18:47.106 00:31:02 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:47.106 00:31:02 -- target/dif.sh@43 -- # local sub 00:18:47.106 00:31:02 -- target/dif.sh@45 -- # for sub in "$@" 00:18:47.106 00:31:02 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:18:47.106 00:31:02 -- target/dif.sh@36 -- # local sub_id=0 00:18:47.106 00:31:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@45 -- # for sub in "$@" 00:18:47.106 00:31:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:47.106 00:31:02 -- target/dif.sh@36 -- # local sub_id=1 00:18:47.106 00:31:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 ************************************ 00:18:47.106 END TEST fio_dif_1_multi_subsystems 00:18:47.106 ************************************ 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:18:47.106 real 0m11.035s 00:18:47.106 user 0m18.734s 00:18:47.106 sys 0m1.911s 00:18:47.106 00:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:47.106 00:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:47.106 00:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 ************************************ 00:18:47.106 START TEST fio_dif_rand_params 00:18:47.106 ************************************ 00:18:47.106 00:31:02 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:18:47.106 00:31:02 -- target/dif.sh@100 -- # local NULL_DIF 00:18:47.106 00:31:02 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:47.106 00:31:02 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:47.106 00:31:02 -- target/dif.sh@103 -- # bs=128k 00:18:47.106 00:31:02 -- target/dif.sh@103 -- # numjobs=3 00:18:47.106 00:31:02 -- target/dif.sh@103 -- # iodepth=3 00:18:47.106 00:31:02 -- target/dif.sh@103 -- # runtime=5 00:18:47.106 00:31:02 -- target/dif.sh@105 -- # create_subsystems 0 00:18:47.106 00:31:02 -- target/dif.sh@28 -- # local sub 00:18:47.106 00:31:02 -- target/dif.sh@30 -- # for sub in "$@" 00:18:47.106 00:31:02 -- target/dif.sh@31 -- # create_subsystem 0 00:18:47.106 00:31:02 -- target/dif.sh@18 -- # local sub_id=0 00:18:47.106 00:31:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 bdev_null0 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 
00:31:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:47.106 00:31:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:47.106 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.106 [2024-09-29 00:31:02.220471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.106 00:31:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:47.106 00:31:02 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:47.106 00:31:02 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:47.106 00:31:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:47.106 00:31:02 -- nvmf/common.sh@520 -- # config=() 00:18:47.106 00:31:02 -- nvmf/common.sh@520 -- # local subsystem config 00:18:47.106 00:31:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.106 00:31:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:47.106 00:31:02 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.106 00:31:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:47.106 { 00:18:47.106 "params": { 00:18:47.106 "name": "Nvme$subsystem", 00:18:47.106 "trtype": "$TEST_TRANSPORT", 00:18:47.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.106 "adrfam": "ipv4", 00:18:47.106 "trsvcid": "$NVMF_PORT", 00:18:47.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.106 "hdgst": ${hdgst:-false}, 00:18:47.106 "ddgst": ${ddgst:-false} 00:18:47.106 }, 00:18:47.106 "method": "bdev_nvme_attach_controller" 00:18:47.106 } 00:18:47.106 EOF 00:18:47.106 )") 00:18:47.106 00:31:02 -- target/dif.sh@82 -- # gen_fio_conf 00:18:47.106 00:31:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:47.106 00:31:02 -- target/dif.sh@54 -- # local file 00:18:47.106 00:31:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.106 00:31:02 -- target/dif.sh@56 -- # cat 00:18:47.106 00:31:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:47.106 00:31:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.106 00:31:02 -- common/autotest_common.sh@1320 -- # shift 00:18:47.106 00:31:02 -- nvmf/common.sh@542 -- # cat 00:18:47.106 00:31:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:47.106 00:31:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.106 00:31:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:47.106 00:31:02 -- target/dif.sh@72 -- # (( file <= files )) 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:47.106 00:31:02 -- nvmf/common.sh@544 -- # jq . 00:18:47.106 00:31:02 -- nvmf/common.sh@545 -- # IFS=, 00:18:47.106 00:31:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:47.106 "params": { 00:18:47.106 "name": "Nvme0", 00:18:47.106 "trtype": "tcp", 00:18:47.106 "traddr": "10.0.0.2", 00:18:47.106 "adrfam": "ipv4", 00:18:47.106 "trsvcid": "4420", 00:18:47.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:47.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:47.106 "hdgst": false, 00:18:47.106 "ddgst": false 00:18:47.106 }, 00:18:47.106 "method": "bdev_nvme_attach_controller" 00:18:47.106 }' 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:47.106 00:31:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:47.106 00:31:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:47.106 00:31:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:47.106 00:31:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:47.106 00:31:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.106 00:31:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:47.106 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:47.106 ... 00:18:47.106 fio-3.35 00:18:47.106 Starting 3 threads 00:18:47.106 [2024-09-29 00:31:02.770721] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
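The JSON printed just above is the whole story for the I/O side: LD_PRELOAD loads the bdev fio plugin from build/fio/spdk_bdev, --spdk_json_conf feeds it the bdev_nvme_attach_controller config, and the job then targets the bdev attached over NVMe/TCP. A standalone equivalent would look roughly like the sketch below; the job file is a guess at what gen_fio_conf produces for this NULL_DIF=3 pass (the harness actually passes both files via /dev/fd), and nvme.json / Nvme0n1 are assumed names for the config file and the namespace bdev.
# hypothetical standalone re-run of the fio command above
cat > randread.fio <<'EOF'
[global]
# the SPDK fio plugins require thread=1
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
# bdev exposed by the Nvme0 attach_controller entry in nvme.json
filename=Nvme0n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=nvme.json randread.fio
The numbers that follow are easy to sanity-check: each of the 3 threads completes 1344 reads in ~5.004 s, i.e. ~268.6 IOPS, and 268.6 x 128 KiB is ~33.6 MiB/s per thread, which is why the group total comes out at ~101 MiB/s (504 MiB of reads in ~5 s).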
00:18:47.106 [2024-09-29 00:31:02.770813] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:52.370 00:18:52.371 filename0: (groupid=0, jobs=1): err= 0: pid=74523: Sun Sep 29 00:31:07 2024 00:18:52.371 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5004msec) 00:18:52.371 slat (nsec): min=4631, max=74819, avg=10551.24, stdev=5616.40 00:18:52.371 clat (usec): min=10252, max=12279, avg=11142.90, stdev=420.52 00:18:52.371 lat (usec): min=10260, max=12298, avg=11153.45, stdev=420.81 00:18:52.371 clat percentiles (usec): 00:18:52.371 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:18:52.371 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:18:52.371 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:18:52.371 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12256], 99.95th=[12256], 00:18:52.371 | 99.99th=[12256] 00:18:52.371 bw ( KiB/s): min=33024, max=35328, per=33.18%, avg=34218.67, stdev=677.31, samples=9 00:18:52.371 iops : min= 258, max= 276, avg=267.33, stdev= 5.29, samples=9 00:18:52.371 lat (msec) : 20=100.00% 00:18:52.371 cpu : usr=90.71%, sys=8.22%, ctx=37, majf=0, minf=0 00:18:52.371 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.371 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:52.371 filename0: (groupid=0, jobs=1): err= 0: pid=74524: Sun Sep 29 00:31:07 2024 00:18:52.371 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5004msec) 00:18:52.371 slat (nsec): min=6791, max=74841, avg=14695.71, stdev=4815.03 00:18:52.371 clat (usec): min=9166, max=13507, avg=11135.77, stdev=443.38 00:18:52.371 lat (usec): min=9182, max=13532, avg=11150.47, stdev=443.59 00:18:52.371 clat percentiles (usec): 00:18:52.371 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:18:52.371 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:18:52.371 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:18:52.371 | 99.00th=[11994], 99.50th=[11994], 99.90th=[13435], 99.95th=[13566], 00:18:52.371 | 99.99th=[13566] 00:18:52.371 bw ( KiB/s): min=33024, max=36096, per=33.18%, avg=34218.67, stdev=949.27, samples=9 00:18:52.371 iops : min= 258, max= 282, avg=267.33, stdev= 7.42, samples=9 00:18:52.371 lat (msec) : 10=0.22%, 20=99.78% 00:18:52.371 cpu : usr=92.00%, sys=7.24%, ctx=38, majf=0, minf=0 00:18:52.371 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.371 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:52.371 filename0: (groupid=0, jobs=1): err= 0: pid=74525: Sun Sep 29 00:31:07 2024 00:18:52.371 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5004msec) 00:18:52.371 slat (nsec): min=7087, max=75202, avg=14024.21, stdev=4552.12 00:18:52.371 clat (usec): min=9145, max=13594, avg=11138.42, stdev=444.85 00:18:52.371 lat (usec): min=9158, max=13619, avg=11152.44, stdev=444.98 00:18:52.371 clat percentiles (usec): 00:18:52.371 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 
20.00th=[10814], 00:18:52.371 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:18:52.371 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:18:52.371 | 99.00th=[11994], 99.50th=[11994], 99.90th=[13566], 99.95th=[13566], 00:18:52.371 | 99.99th=[13566] 00:18:52.371 bw ( KiB/s): min=33024, max=36096, per=33.18%, avg=34218.67, stdev=949.27, samples=9 00:18:52.371 iops : min= 258, max= 282, avg=267.33, stdev= 7.42, samples=9 00:18:52.371 lat (msec) : 10=0.22%, 20=99.78% 00:18:52.371 cpu : usr=91.92%, sys=7.36%, ctx=3, majf=0, minf=0 00:18:52.371 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.371 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.371 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:52.371 00:18:52.371 Run status group 0 (all jobs): 00:18:52.371 READ: bw=101MiB/s (106MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=504MiB (528MB), run=5004-5004msec 00:18:52.371 00:31:08 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:52.371 00:31:08 -- target/dif.sh@43 -- # local sub 00:18:52.371 00:31:08 -- target/dif.sh@45 -- # for sub in "$@" 00:18:52.371 00:31:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:52.371 00:31:08 -- target/dif.sh@36 -- # local sub_id=0 00:18:52.371 00:31:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # bs=4k 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # numjobs=8 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # iodepth=16 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # runtime= 00:18:52.371 00:31:08 -- target/dif.sh@109 -- # files=2 00:18:52.371 00:31:08 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:52.371 00:31:08 -- target/dif.sh@28 -- # local sub 00:18:52.371 00:31:08 -- target/dif.sh@30 -- # for sub in "$@" 00:18:52.371 00:31:08 -- target/dif.sh@31 -- # create_subsystem 0 00:18:52.371 00:31:08 -- target/dif.sh@18 -- # local sub_id=0 00:18:52.371 00:31:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 bdev_null0 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 [2024-09-29 00:31:08.121816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@30 -- # for sub in "$@" 00:18:52.371 00:31:08 -- target/dif.sh@31 -- # create_subsystem 1 00:18:52.371 00:31:08 -- target/dif.sh@18 -- # local sub_id=1 00:18:52.371 00:31:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 bdev_null1 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@30 -- # for sub in "$@" 00:18:52.371 00:31:08 -- target/dif.sh@31 -- # create_subsystem 2 00:18:52.371 00:31:08 -- target/dif.sh@18 -- # local sub_id=2 00:18:52.371 00:31:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 bdev_null2 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:52.371 00:31:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:52.371 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 00:31:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:52.371 00:31:08 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:52.371 00:31:08 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:52.371 00:31:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:52.371 00:31:08 -- nvmf/common.sh@520 -- # config=() 00:18:52.371 00:31:08 -- nvmf/common.sh@520 -- # local subsystem config 00:18:52.371 00:31:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.371 00:31:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.371 00:31:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.371 { 00:18:52.371 "params": { 00:18:52.372 "name": "Nvme$subsystem", 00:18:52.372 "trtype": "$TEST_TRANSPORT", 00:18:52.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.372 "adrfam": "ipv4", 00:18:52.372 "trsvcid": "$NVMF_PORT", 00:18:52.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.372 "hdgst": ${hdgst:-false}, 00:18:52.372 "ddgst": ${ddgst:-false} 00:18:52.372 }, 00:18:52.372 "method": "bdev_nvme_attach_controller" 00:18:52.372 } 00:18:52.372 EOF 00:18:52.372 )") 00:18:52.372 00:31:08 -- target/dif.sh@82 -- # gen_fio_conf 00:18:52.372 00:31:08 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.372 00:31:08 -- target/dif.sh@54 -- # local file 00:18:52.372 00:31:08 -- target/dif.sh@56 -- # cat 00:18:52.372 00:31:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:52.372 00:31:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.372 00:31:08 -- nvmf/common.sh@542 -- # cat 00:18:52.372 00:31:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:52.372 00:31:08 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.372 00:31:08 -- common/autotest_common.sh@1320 -- # shift 00:18:52.372 00:31:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:52.372 00:31:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file <= files )) 00:18:52.372 00:31:08 -- target/dif.sh@73 -- # cat 00:18:52.372 00:31:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.372 00:31:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.372 00:31:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.372 { 00:18:52.372 "params": { 00:18:52.372 "name": "Nvme$subsystem", 00:18:52.372 "trtype": "$TEST_TRANSPORT", 00:18:52.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.372 "adrfam": "ipv4", 00:18:52.372 "trsvcid": "$NVMF_PORT", 00:18:52.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.372 "hdgst": ${hdgst:-false}, 00:18:52.372 "ddgst": ${ddgst:-false} 00:18:52.372 }, 00:18:52.372 "method": "bdev_nvme_attach_controller" 00:18:52.372 } 00:18:52.372 EOF 00:18:52.372 
)") 00:18:52.372 00:31:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:52.372 00:31:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:52.372 00:31:08 -- nvmf/common.sh@542 -- # cat 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file++ )) 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file <= files )) 00:18:52.372 00:31:08 -- target/dif.sh@73 -- # cat 00:18:52.372 00:31:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:52.372 00:31:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:52.372 { 00:18:52.372 "params": { 00:18:52.372 "name": "Nvme$subsystem", 00:18:52.372 "trtype": "$TEST_TRANSPORT", 00:18:52.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.372 "adrfam": "ipv4", 00:18:52.372 "trsvcid": "$NVMF_PORT", 00:18:52.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.372 "hdgst": ${hdgst:-false}, 00:18:52.372 "ddgst": ${ddgst:-false} 00:18:52.372 }, 00:18:52.372 "method": "bdev_nvme_attach_controller" 00:18:52.372 } 00:18:52.372 EOF 00:18:52.372 )") 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file++ )) 00:18:52.372 00:31:08 -- target/dif.sh@72 -- # (( file <= files )) 00:18:52.372 00:31:08 -- nvmf/common.sh@542 -- # cat 00:18:52.630 00:31:08 -- nvmf/common.sh@544 -- # jq . 00:18:52.630 00:31:08 -- nvmf/common.sh@545 -- # IFS=, 00:18:52.630 00:31:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:52.630 "params": { 00:18:52.630 "name": "Nvme0", 00:18:52.630 "trtype": "tcp", 00:18:52.630 "traddr": "10.0.0.2", 00:18:52.630 "adrfam": "ipv4", 00:18:52.630 "trsvcid": "4420", 00:18:52.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:52.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:52.630 "hdgst": false, 00:18:52.630 "ddgst": false 00:18:52.630 }, 00:18:52.630 "method": "bdev_nvme_attach_controller" 00:18:52.630 },{ 00:18:52.630 "params": { 00:18:52.630 "name": "Nvme1", 00:18:52.630 "trtype": "tcp", 00:18:52.630 "traddr": "10.0.0.2", 00:18:52.630 "adrfam": "ipv4", 00:18:52.630 "trsvcid": "4420", 00:18:52.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.630 "hdgst": false, 00:18:52.630 "ddgst": false 00:18:52.630 }, 00:18:52.630 "method": "bdev_nvme_attach_controller" 00:18:52.630 },{ 00:18:52.630 "params": { 00:18:52.630 "name": "Nvme2", 00:18:52.630 "trtype": "tcp", 00:18:52.630 "traddr": "10.0.0.2", 00:18:52.630 "adrfam": "ipv4", 00:18:52.630 "trsvcid": "4420", 00:18:52.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:52.631 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:52.631 "hdgst": false, 00:18:52.631 "ddgst": false 00:18:52.631 }, 00:18:52.631 "method": "bdev_nvme_attach_controller" 00:18:52.631 }' 00:18:52.631 00:31:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:52.631 00:31:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:52.631 00:31:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.631 00:31:08 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.631 00:31:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:52.631 00:31:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:52.631 00:31:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:52.631 00:31:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:52.631 00:31:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.631 00:31:08 -- 
common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:52.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:52.631 ... 00:18:52.631 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:52.631 ... 00:18:52.631 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:52.631 ... 00:18:52.631 fio-3.35 00:18:52.631 Starting 24 threads 00:18:53.199 [2024-09-29 00:31:08.893293] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:53.199 [2024-09-29 00:31:08.893403] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:05.445 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74624: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=203, BW=814KiB/s (834kB/s)(8164KiB/10027msec) 00:19:05.445 slat (usec): min=4, max=4032, avg=22.72, stdev=177.59 00:19:05.445 clat (msec): min=13, max=153, avg=78.47, stdev=25.54 00:19:05.445 lat (msec): min=13, max=153, avg=78.49, stdev=25.54 00:19:05.445 clat percentiles (msec): 00:19:05.445 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:19:05.445 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:19:05.445 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 121], 00:19:05.445 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 155], 00:19:05.445 | 99.99th=[ 155] 00:19:05.445 bw ( KiB/s): min= 512, max= 1200, per=3.92%, avg=810.50, stdev=208.20, samples=20 00:19:05.445 iops : min= 128, max= 300, avg=202.60, stdev=52.01, samples=20 00:19:05.445 lat (msec) : 20=0.78%, 50=15.14%, 100=60.90%, 250=23.17% 00:19:05.445 cpu : usr=42.18%, sys=2.18%, ctx=1216, majf=0, minf=9 00:19:05.445 IO depths : 1=0.1%, 2=2.7%, 4=10.7%, 8=71.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:19:05.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 complete : 0=0.0%, 4=90.4%, 8=7.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74625: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=212, BW=851KiB/s (872kB/s)(8516KiB/10005msec) 00:19:05.445 slat (usec): min=3, max=8027, avg=29.24, stdev=347.00 00:19:05.445 clat (msec): min=5, max=132, avg=75.04, stdev=23.25 00:19:05.445 lat (msec): min=5, max=132, avg=75.07, stdev=23.25 00:19:05.445 clat percentiles (msec): 00:19:05.445 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 52], 00:19:05.445 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 81], 00:19:05.445 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:19:05.445 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 133], 00:19:05.445 | 99.99th=[ 133] 00:19:05.445 bw ( KiB/s): min= 632, max= 1248, per=4.09%, avg=845.47, stdev=172.04, samples=19 00:19:05.445 iops : min= 158, max= 312, avg=211.37, stdev=43.01, samples=19 00:19:05.445 lat (msec) : 10=0.14%, 20=0.28%, 50=18.08%, 100=65.48%, 250=16.02% 00:19:05.445 cpu : usr=31.44%, sys=1.53%, ctx=919, majf=0, minf=9 00:19:05.445 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:05.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:05.445 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 issued rwts: total=2129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74626: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=216, BW=864KiB/s (885kB/s)(8652KiB/10010msec) 00:19:05.445 slat (nsec): min=3632, max=38634, avg=14979.28, stdev=4875.77 00:19:05.445 clat (msec): min=22, max=132, avg=73.96, stdev=23.36 00:19:05.445 lat (msec): min=22, max=132, avg=73.98, stdev=23.36 00:19:05.445 clat percentiles (msec): 00:19:05.445 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 48], 00:19:05.445 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:19:05.445 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:19:05.445 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:19:05.445 | 99.99th=[ 133] 00:19:05.445 bw ( KiB/s): min= 640, max= 1256, per=4.16%, avg=860.00, stdev=189.47, samples=20 00:19:05.445 iops : min= 160, max= 314, avg=215.00, stdev=47.37, samples=20 00:19:05.445 lat (msec) : 50=22.65%, 100=62.00%, 250=15.35% 00:19:05.445 cpu : usr=33.79%, sys=1.71%, ctx=945, majf=0, minf=9 00:19:05.445 IO depths : 1=0.1%, 2=1.8%, 4=7.5%, 8=75.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:05.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74627: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=222, BW=888KiB/s (909kB/s)(8892KiB/10012msec) 00:19:05.445 slat (usec): min=4, max=8030, avg=28.13, stdev=243.51 00:19:05.445 clat (msec): min=19, max=131, avg=71.95, stdev=21.53 00:19:05.445 lat (msec): min=19, max=131, avg=71.97, stdev=21.53 00:19:05.445 clat percentiles (msec): 00:19:05.445 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 51], 00:19:05.445 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 73], 00:19:05.445 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 108], 00:19:05.445 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:05.445 | 99.99th=[ 132] 00:19:05.445 bw ( KiB/s): min= 712, max= 1248, per=4.27%, avg=882.16, stdev=139.73, samples=19 00:19:05.445 iops : min= 178, max= 312, avg=220.53, stdev=34.93, samples=19 00:19:05.445 lat (msec) : 20=0.27%, 50=19.12%, 100=68.11%, 250=12.51% 00:19:05.445 cpu : usr=39.16%, sys=2.08%, ctx=1393, majf=0, minf=9 00:19:05.445 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:05.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74628: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=212, BW=851KiB/s (872kB/s)(8524KiB/10011msec) 00:19:05.445 slat (usec): min=3, max=8027, avg=24.65, stdev=260.31 00:19:05.445 clat (msec): min=21, max=144, avg=75.05, stdev=23.88 00:19:05.445 lat (msec): min=21, max=144, avg=75.07, stdev=23.89 00:19:05.445 clat percentiles (msec): 00:19:05.445 | 1.00th=[ 24], 5.00th=[ 
38], 10.00th=[ 47], 20.00th=[ 50], 00:19:05.445 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:19:05.445 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 111], 00:19:05.445 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 144], 00:19:05.445 | 99.99th=[ 144] 00:19:05.445 bw ( KiB/s): min= 544, max= 1232, per=4.10%, avg=848.00, stdev=189.19, samples=20 00:19:05.445 iops : min= 136, max= 308, avg=212.00, stdev=47.30, samples=20 00:19:05.445 lat (msec) : 50=20.51%, 100=62.65%, 250=16.85% 00:19:05.445 cpu : usr=35.36%, sys=1.74%, ctx=987, majf=0, minf=10 00:19:05.445 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:05.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.445 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.445 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.445 filename0: (groupid=0, jobs=1): err= 0: pid=74629: Sun Sep 29 00:31:19 2024 00:19:05.445 read: IOPS=204, BW=818KiB/s (838kB/s)(8224KiB/10053msec) 00:19:05.445 slat (usec): min=4, max=8024, avg=36.44, stdev=413.83 00:19:05.445 clat (msec): min=5, max=156, avg=77.96, stdev=27.20 00:19:05.445 lat (msec): min=5, max=156, avg=77.99, stdev=27.21 00:19:05.445 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 47], 20.00th=[ 61], 00:19:05.446 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:19:05.446 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 121], 00:19:05.446 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 157], 00:19:05.446 | 99.99th=[ 157] 00:19:05.446 bw ( KiB/s): min= 512, max= 1408, per=3.94%, avg=816.00, stdev=231.59, samples=20 00:19:05.446 iops : min= 128, max= 352, avg=204.00, stdev=57.90, samples=20 00:19:05.446 lat (msec) : 10=1.46%, 20=1.65%, 50=13.23%, 100=62.89%, 250=20.77% 00:19:05.446 cpu : usr=35.03%, sys=1.65%, ctx=970, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=2.0%, 4=7.7%, 8=74.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=90.0%, 8=8.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename0: (groupid=0, jobs=1): err= 0: pid=74630: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=226, BW=904KiB/s (926kB/s)(9052KiB/10009msec) 00:19:05.446 slat (nsec): min=5086, max=85476, avg=15454.88, stdev=5322.82 00:19:05.446 clat (msec): min=17, max=129, avg=70.69, stdev=21.72 00:19:05.446 lat (msec): min=17, max=129, avg=70.71, stdev=21.72 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 51], 00:19:05.446 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:19:05.446 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 107], 00:19:05.446 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:19:05.446 | 99.99th=[ 130] 00:19:05.446 bw ( KiB/s): min= 720, max= 1280, per=4.35%, avg=899.80, stdev=154.07, samples=20 00:19:05.446 iops : min= 180, max= 320, avg=224.95, stdev=38.52, samples=20 00:19:05.446 lat (msec) : 20=0.27%, 50=19.75%, 100=68.10%, 250=11.89% 00:19:05.446 cpu : usr=45.20%, sys=2.54%, ctx=1411, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.8%, 
16=15.9%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename0: (groupid=0, jobs=1): err= 0: pid=74631: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=216, BW=864KiB/s (885kB/s)(8656KiB/10014msec) 00:19:05.446 slat (usec): min=4, max=8033, avg=27.93, stdev=310.49 00:19:05.446 clat (msec): min=19, max=139, avg=73.90, stdev=22.88 00:19:05.446 lat (msec): min=19, max=139, avg=73.92, stdev=22.89 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 52], 00:19:05.446 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:19:05.446 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:19:05.446 | 99.00th=[ 121], 99.50th=[ 133], 99.90th=[ 133], 99.95th=[ 140], 00:19:05.446 | 99.99th=[ 140] 00:19:05.446 bw ( KiB/s): min= 640, max= 1192, per=4.15%, avg=859.75, stdev=160.94, samples=20 00:19:05.446 iops : min= 160, max= 298, avg=214.90, stdev=40.22, samples=20 00:19:05.446 lat (msec) : 20=0.28%, 50=18.95%, 100=65.25%, 250=15.53% 00:19:05.446 cpu : usr=33.74%, sys=1.78%, ctx=1136, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=1.0%, 4=4.2%, 8=79.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename1: (groupid=0, jobs=1): err= 0: pid=74632: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=212, BW=849KiB/s (869kB/s)(8492KiB/10006msec) 00:19:05.446 slat (usec): min=4, max=8037, avg=35.00, stdev=305.25 00:19:05.446 clat (msec): min=18, max=149, avg=75.20, stdev=24.66 00:19:05.446 lat (msec): min=18, max=149, avg=75.24, stdev=24.66 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 49], 00:19:05.446 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 84], 00:19:05.446 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 114], 00:19:05.446 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 150], 00:19:05.446 | 99.99th=[ 150] 00:19:05.446 bw ( KiB/s): min= 528, max= 1280, per=4.08%, avg=844.05, stdev=194.41, samples=20 00:19:05.446 iops : min= 132, max= 320, avg=211.00, stdev=48.60, samples=20 00:19:05.446 lat (msec) : 20=0.42%, 50=20.87%, 100=59.40%, 250=19.31% 00:19:05.446 cpu : usr=43.61%, sys=2.46%, ctx=1309, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=2.7%, 4=10.7%, 8=72.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=89.9%, 8=7.7%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename1: (groupid=0, jobs=1): err= 0: pid=74633: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=220, BW=882KiB/s (903kB/s)(8844KiB/10026msec) 00:19:05.446 slat (usec): min=4, max=8025, avg=26.73, stdev=307.77 00:19:05.446 clat (msec): min=21, max=128, avg=72.38, stdev=21.39 00:19:05.446 lat (msec): min=21, 
max=128, avg=72.41, stdev=21.40 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 52], 00:19:05.446 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:19:05.446 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:19:05.446 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 127], 00:19:05.446 | 99.99th=[ 129] 00:19:05.446 bw ( KiB/s): min= 664, max= 1160, per=4.25%, avg=879.70, stdev=155.34, samples=20 00:19:05.446 iops : min= 166, max= 290, avg=219.90, stdev=38.80, samples=20 00:19:05.446 lat (msec) : 50=18.50%, 100=68.07%, 250=13.43% 00:19:05.446 cpu : usr=40.16%, sys=1.99%, ctx=1309, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.8%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename1: (groupid=0, jobs=1): err= 0: pid=74634: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=229, BW=917KiB/s (939kB/s)(9172KiB/10003msec) 00:19:05.446 slat (usec): min=3, max=8034, avg=21.68, stdev=236.73 00:19:05.446 clat (msec): min=4, max=141, avg=69.71, stdev=22.06 00:19:05.446 lat (msec): min=4, max=141, avg=69.73, stdev=22.07 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:19:05.446 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:19:05.446 | 70.00th=[ 80], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 108], 00:19:05.446 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 142], 00:19:05.446 | 99.99th=[ 142] 00:19:05.446 bw ( KiB/s): min= 768, max= 1232, per=4.40%, avg=910.32, stdev=138.42, samples=19 00:19:05.446 iops : min= 192, max= 308, avg=227.58, stdev=34.61, samples=19 00:19:05.446 lat (msec) : 10=0.13%, 50=23.77%, 100=64.98%, 250=11.12% 00:19:05.446 cpu : usr=36.33%, sys=1.94%, ctx=1162, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename1: (groupid=0, jobs=1): err= 0: pid=74635: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=219, BW=877KiB/s (898kB/s)(8796KiB/10027msec) 00:19:05.446 slat (usec): min=8, max=9067, avg=28.17, stdev=320.80 00:19:05.446 clat (msec): min=13, max=137, avg=72.78, stdev=22.27 00:19:05.446 lat (msec): min=13, max=137, avg=72.81, stdev=22.28 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 53], 00:19:05.446 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:19:05.446 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:19:05.446 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 130], 99.95th=[ 132], 00:19:05.446 | 99.99th=[ 138] 00:19:05.446 bw ( KiB/s): min= 656, max= 1288, per=4.23%, avg=874.85, stdev=173.80, samples=20 00:19:05.446 iops : min= 164, max= 322, avg=218.70, stdev=43.44, samples=20 00:19:05.446 lat (msec) : 20=0.73%, 50=16.83%, 100=68.58%, 250=13.87% 00:19:05.446 cpu : usr=41.29%, 
sys=2.00%, ctx=1303, majf=0, minf=9 00:19:05.446 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:05.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.446 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.446 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.446 filename1: (groupid=0, jobs=1): err= 0: pid=74636: Sun Sep 29 00:31:19 2024 00:19:05.446 read: IOPS=214, BW=859KiB/s (879kB/s)(8636KiB/10057msec) 00:19:05.446 slat (usec): min=4, max=12028, avg=22.54, stdev=283.34 00:19:05.446 clat (usec): min=1595, max=146470, avg=74295.45, stdev=26279.04 00:19:05.446 lat (usec): min=1605, max=146480, avg=74317.99, stdev=26275.31 00:19:05.446 clat percentiles (msec): 00:19:05.446 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 45], 20.00th=[ 55], 00:19:05.446 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 82], 00:19:05.447 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:19:05.447 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:05.447 | 99.99th=[ 146] 00:19:05.447 bw ( KiB/s): min= 608, max= 1648, per=4.15%, avg=859.60, stdev=248.09, samples=20 00:19:05.447 iops : min= 152, max= 412, avg=214.90, stdev=62.02, samples=20 00:19:05.447 lat (msec) : 2=0.74%, 10=2.22%, 20=1.39%, 50=12.60%, 100=66.10% 00:19:05.447 lat (msec) : 250=16.95% 00:19:05.447 cpu : usr=37.47%, sys=1.79%, ctx=1215, majf=0, minf=9 00:19:05.447 IO depths : 1=0.2%, 2=1.0%, 4=3.6%, 8=78.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=88.9%, 8=10.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename1: (groupid=0, jobs=1): err= 0: pid=74637: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=209, BW=836KiB/s (856kB/s)(8400KiB/10047msec) 00:19:05.447 slat (usec): min=5, max=8025, avg=18.00, stdev=174.89 00:19:05.447 clat (msec): min=5, max=156, avg=76.35, stdev=25.59 00:19:05.447 lat (msec): min=5, max=156, avg=76.37, stdev=25.59 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 8], 5.00th=[ 34], 10.00th=[ 46], 20.00th=[ 61], 00:19:05.447 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 84], 00:19:05.447 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 117], 00:19:05.447 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:19:05.447 | 99.99th=[ 157] 00:19:05.447 bw ( KiB/s): min= 624, max= 1513, per=4.03%, avg=833.25, stdev=227.65, samples=20 00:19:05.447 iops : min= 156, max= 378, avg=208.30, stdev=56.87, samples=20 00:19:05.447 lat (msec) : 10=1.52%, 20=1.52%, 50=12.43%, 100=66.95%, 250=17.57% 00:19:05.447 cpu : usr=33.42%, sys=1.68%, ctx=963, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=74.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=90.0%, 8=8.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename1: (groupid=0, jobs=1): err= 0: pid=74638: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=214, BW=858KiB/s (878kB/s)(8584KiB/10007msec) 
00:19:05.447 slat (usec): min=3, max=12031, avg=23.65, stdev=311.74 00:19:05.447 clat (msec): min=22, max=144, avg=74.49, stdev=24.15 00:19:05.447 lat (msec): min=22, max=144, avg=74.52, stdev=24.15 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 49], 00:19:05.447 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:19:05.447 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:19:05.447 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 144], 00:19:05.447 | 99.99th=[ 144] 00:19:05.447 bw ( KiB/s): min= 544, max= 1256, per=4.12%, avg=853.20, stdev=191.44, samples=20 00:19:05.447 iops : min= 136, max= 314, avg=213.30, stdev=47.86, samples=20 00:19:05.447 lat (msec) : 50=21.76%, 100=62.67%, 250=15.56% 00:19:05.447 cpu : usr=31.10%, sys=1.86%, ctx=913, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=88.7%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename1: (groupid=0, jobs=1): err= 0: pid=74639: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=217, BW=871KiB/s (892kB/s)(8716KiB/10005msec) 00:19:05.447 slat (usec): min=3, max=8030, avg=33.60, stdev=383.48 00:19:05.447 clat (msec): min=4, max=143, avg=73.33, stdev=21.64 00:19:05.447 lat (msec): min=4, max=143, avg=73.37, stdev=21.63 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 57], 00:19:05.447 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:19:05.447 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:19:05.447 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 132], 99.95th=[ 132], 00:19:05.447 | 99.99th=[ 144] 00:19:05.447 bw ( KiB/s): min= 640, max= 1192, per=4.17%, avg=863.58, stdev=141.14, samples=19 00:19:05.447 iops : min= 160, max= 298, avg=215.89, stdev=35.29, samples=19 00:19:05.447 lat (msec) : 10=0.32%, 50=17.58%, 100=70.45%, 250=11.66% 00:19:05.447 cpu : usr=33.43%, sys=1.52%, ctx=958, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename2: (groupid=0, jobs=1): err= 0: pid=74640: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=210, BW=843KiB/s (863kB/s)(8452KiB/10027msec) 00:19:05.447 slat (usec): min=8, max=4034, avg=19.78, stdev=151.36 00:19:05.447 clat (msec): min=22, max=154, avg=75.78, stdev=24.30 00:19:05.447 lat (msec): min=22, max=154, avg=75.79, stdev=24.29 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 53], 00:19:05.447 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 79], 00:19:05.447 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:19:05.447 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:19:05.447 | 99.99th=[ 155] 00:19:05.447 bw ( KiB/s): min= 528, max= 1216, per=4.06%, avg=840.50, stdev=198.37, samples=20 00:19:05.447 iops : 
min= 132, max= 304, avg=210.10, stdev=49.56, samples=20 00:19:05.447 lat (msec) : 50=18.13%, 100=63.23%, 250=18.65% 00:19:05.447 cpu : usr=37.20%, sys=1.94%, ctx=1198, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename2: (groupid=0, jobs=1): err= 0: pid=74641: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=216, BW=867KiB/s (887kB/s)(8700KiB/10039msec) 00:19:05.447 slat (usec): min=4, max=8030, avg=17.86, stdev=171.95 00:19:05.447 clat (msec): min=8, max=145, avg=73.71, stdev=22.99 00:19:05.447 lat (msec): min=8, max=145, avg=73.73, stdev=23.00 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 57], 00:19:05.447 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:19:05.447 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:19:05.447 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 140], 99.95th=[ 146], 00:19:05.447 | 99.99th=[ 146] 00:19:05.447 bw ( KiB/s): min= 656, max= 1248, per=4.17%, avg=863.35, stdev=166.06, samples=20 00:19:05.447 iops : min= 164, max= 312, avg=215.80, stdev=41.54, samples=20 00:19:05.447 lat (msec) : 10=0.74%, 20=0.74%, 50=17.38%, 100=69.70%, 250=11.45% 00:19:05.447 cpu : usr=32.74%, sys=1.49%, ctx=940, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename2: (groupid=0, jobs=1): err= 0: pid=74642: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=223, BW=892KiB/s (914kB/s)(8968KiB/10052msec) 00:19:05.447 slat (usec): min=4, max=4026, avg=20.44, stdev=169.32 00:19:05.447 clat (msec): min=5, max=119, avg=71.60, stdev=23.18 00:19:05.447 lat (msec): min=5, max=119, avg=71.62, stdev=23.19 00:19:05.447 clat percentiles (msec): 00:19:05.447 | 1.00th=[ 8], 5.00th=[ 35], 10.00th=[ 45], 20.00th=[ 53], 00:19:05.447 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:19:05.447 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 108], 00:19:05.447 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 120], 99.95th=[ 120], 00:19:05.447 | 99.99th=[ 121] 00:19:05.447 bw ( KiB/s): min= 688, max= 1552, per=4.30%, avg=890.40, stdev=208.50, samples=20 00:19:05.447 iops : min= 172, max= 388, avg=222.60, stdev=52.13, samples=20 00:19:05.447 lat (msec) : 10=1.34%, 20=1.52%, 50=15.48%, 100=68.87%, 250=12.80% 00:19:05.447 cpu : usr=42.65%, sys=2.33%, ctx=1295, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename2: (groupid=0, jobs=1): err= 0: pid=74643: Sun Sep 
29 00:31:19 2024 00:19:05.447 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10002msec) 00:19:05.447 slat (usec): min=4, max=4030, avg=21.54, stdev=161.53 00:19:05.447 clat (usec): min=1504, max=139175, avg=70871.62, stdev=25941.68 00:19:05.447 lat (usec): min=1511, max=139190, avg=70893.16, stdev=25952.26 00:19:05.447 clat percentiles (usec): 00:19:05.447 | 1.00th=[ 1926], 5.00th=[ 33817], 10.00th=[ 40109], 20.00th=[ 47973], 00:19:05.447 | 30.00th=[ 55837], 40.00th=[ 64226], 50.00th=[ 70779], 60.00th=[ 74974], 00:19:05.447 | 70.00th=[ 84411], 80.00th=[ 95945], 90.00th=[105382], 95.00th=[110625], 00:19:05.447 | 99.00th=[128451], 99.50th=[129500], 99.90th=[137364], 99.95th=[139461], 00:19:05.447 | 99.99th=[139461] 00:19:05.447 bw ( KiB/s): min= 528, max= 1269, per=4.22%, avg=873.11, stdev=193.32, samples=19 00:19:05.447 iops : min= 132, max= 317, avg=218.26, stdev=48.30, samples=19 00:19:05.447 lat (msec) : 2=1.33%, 4=1.06%, 50=21.87%, 100=61.05%, 250=14.69% 00:19:05.447 cpu : usr=42.07%, sys=2.21%, ctx=1224, majf=0, minf=9 00:19:05.447 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:19:05.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.447 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.447 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.447 filename2: (groupid=0, jobs=1): err= 0: pid=74644: Sun Sep 29 00:31:19 2024 00:19:05.447 read: IOPS=213, BW=854KiB/s (874kB/s)(8556KiB/10019msec) 00:19:05.448 slat (usec): min=6, max=12032, avg=30.13, stdev=396.78 00:19:05.448 clat (msec): min=21, max=133, avg=74.83, stdev=22.06 00:19:05.448 lat (msec): min=21, max=133, avg=74.86, stdev=22.07 00:19:05.448 clat percentiles (msec): 00:19:05.448 | 1.00th=[ 24], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 58], 00:19:05.448 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:19:05.448 | 70.00th=[ 87], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:19:05.448 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:19:05.448 | 99.99th=[ 133] 00:19:05.448 bw ( KiB/s): min= 640, max= 1144, per=4.11%, avg=849.20, stdev=143.11, samples=20 00:19:05.448 iops : min= 160, max= 286, avg=212.30, stdev=35.78, samples=20 00:19:05.448 lat (msec) : 50=17.30%, 100=68.07%, 250=14.63% 00:19:05.448 cpu : usr=31.36%, sys=1.60%, ctx=926, majf=0, minf=9 00:19:05.448 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:05.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.448 filename2: (groupid=0, jobs=1): err= 0: pid=74645: Sun Sep 29 00:31:19 2024 00:19:05.448 read: IOPS=206, BW=827KiB/s (846kB/s)(8288KiB/10027msec) 00:19:05.448 slat (usec): min=7, max=8033, avg=23.61, stdev=264.12 00:19:05.448 clat (msec): min=22, max=156, avg=77.28, stdev=25.08 00:19:05.448 lat (msec): min=22, max=156, avg=77.30, stdev=25.08 00:19:05.448 clat percentiles (msec): 00:19:05.448 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 58], 00:19:05.448 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:19:05.448 | 70.00th=[ 96], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 121], 00:19:05.448 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 
157], 99.95th=[ 157], 00:19:05.448 | 99.99th=[ 157] 00:19:05.448 bw ( KiB/s): min= 496, max= 1168, per=3.97%, avg=822.90, stdev=204.29, samples=20 00:19:05.448 iops : min= 124, max= 292, avg=205.70, stdev=51.04, samples=20 00:19:05.448 lat (msec) : 50=16.46%, 100=63.03%, 250=20.51% 00:19:05.448 cpu : usr=35.20%, sys=1.87%, ctx=1000, majf=0, minf=9 00:19:05.448 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:05.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 complete : 0=0.0%, 4=89.6%, 8=8.8%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 issued rwts: total=2072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.448 filename2: (groupid=0, jobs=1): err= 0: pid=74646: Sun Sep 29 00:31:19 2024 00:19:05.448 read: IOPS=227, BW=909KiB/s (931kB/s)(9092KiB/10002msec) 00:19:05.448 slat (usec): min=4, max=8037, avg=22.05, stdev=200.34 00:19:05.448 clat (msec): min=2, max=140, avg=70.31, stdev=22.93 00:19:05.448 lat (msec): min=2, max=140, avg=70.33, stdev=22.93 00:19:05.448 clat percentiles (msec): 00:19:05.448 | 1.00th=[ 16], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:19:05.448 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 73], 00:19:05.448 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 108], 00:19:05.448 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 142], 00:19:05.448 | 99.99th=[ 142] 00:19:05.448 bw ( KiB/s): min= 656, max= 1269, per=4.33%, avg=896.74, stdev=161.44, samples=19 00:19:05.448 iops : min= 164, max= 317, avg=224.16, stdev=40.34, samples=19 00:19:05.448 lat (msec) : 4=0.70%, 10=0.13%, 20=0.66%, 50=22.13%, 100=63.40% 00:19:05.448 lat (msec) : 250=12.98% 00:19:05.448 cpu : usr=40.38%, sys=2.27%, ctx=1297, majf=0, minf=9 00:19:05.448 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:05.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.448 filename2: (groupid=0, jobs=1): err= 0: pid=74647: Sun Sep 29 00:31:19 2024 00:19:05.448 read: IOPS=214, BW=857KiB/s (877kB/s)(8584KiB/10022msec) 00:19:05.448 slat (usec): min=4, max=5027, avg=20.92, stdev=163.69 00:19:05.448 clat (msec): min=20, max=140, avg=74.56, stdev=23.85 00:19:05.448 lat (msec): min=20, max=140, avg=74.59, stdev=23.85 00:19:05.448 clat percentiles (msec): 00:19:05.448 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:19:05.448 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:19:05.448 | 70.00th=[ 90], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 115], 00:19:05.448 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 140], 00:19:05.448 | 99.99th=[ 140] 00:19:05.448 bw ( KiB/s): min= 528, max= 1200, per=4.13%, avg=854.40, stdev=196.55, samples=20 00:19:05.448 iops : min= 132, max= 300, avg=213.60, stdev=49.14, samples=20 00:19:05.448 lat (msec) : 50=19.20%, 100=62.49%, 250=18.31% 00:19:05.448 cpu : usr=40.89%, sys=2.23%, ctx=1238, majf=0, minf=9 00:19:05.448 IO depths : 1=0.1%, 2=1.4%, 4=5.4%, 8=77.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:05.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.448 issued rwts: 
total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:05.448 00:19:05.448 Run status group 0 (all jobs): 00:19:05.448 READ: bw=20.2MiB/s (21.2MB/s), 814KiB/s-917KiB/s (834kB/s-939kB/s), io=203MiB (213MB), run=10002-10057msec 00:19:05.448 00:31:19 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:05.448 00:31:19 -- target/dif.sh@43 -- # local sub 00:19:05.448 00:31:19 -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.448 00:31:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:05.448 00:31:19 -- target/dif.sh@36 -- # local sub_id=0 00:19:05.448 00:31:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.448 00:31:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:05.448 00:31:19 -- target/dif.sh@36 -- # local sub_id=1 00:19:05.448 00:31:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@45 -- # for sub in "$@" 00:19:05.448 00:31:19 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:05.448 00:31:19 -- target/dif.sh@36 -- # local sub_id=2 00:19:05.448 00:31:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # numjobs=2 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # iodepth=8 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # runtime=5 00:19:05.448 00:31:19 -- target/dif.sh@115 -- # files=1 00:19:05.448 00:31:19 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:05.448 00:31:19 -- target/dif.sh@28 -- # local sub 00:19:05.448 00:31:19 -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.448 00:31:19 -- target/dif.sh@31 -- # create_subsystem 0 00:19:05.448 00:31:19 -- target/dif.sh@18 -- # local sub_id=0 00:19:05.448 00:31:19 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 bdev_null0 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 [2024-09-29 00:31:19.358735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.448 00:31:19 -- target/dif.sh@31 -- # create_subsystem 1 00:19:05.448 00:31:19 -- target/dif.sh@18 -- # local sub_id=1 00:19:05.448 00:31:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 bdev_null1 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:05.448 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.448 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.448 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.448 00:31:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:05.449 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.449 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.449 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.449 00:31:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:05.449 00:31:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:05.449 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:19:05.449 00:31:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:05.449 00:31:19 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:05.449 00:31:19 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:05.449 00:31:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:05.449 00:31:19 -- nvmf/common.sh@520 -- # config=() 00:19:05.449 00:31:19 -- nvmf/common.sh@520 -- # local subsystem config 00:19:05.449 00:31:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:05.449 00:31:19 -- nvmf/common.sh@542 
-- # config+=("$(cat <<-EOF 00:19:05.449 { 00:19:05.449 "params": { 00:19:05.449 "name": "Nvme$subsystem", 00:19:05.449 "trtype": "$TEST_TRANSPORT", 00:19:05.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.449 "adrfam": "ipv4", 00:19:05.449 "trsvcid": "$NVMF_PORT", 00:19:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.449 "hdgst": ${hdgst:-false}, 00:19:05.449 "ddgst": ${ddgst:-false} 00:19:05.449 }, 00:19:05.449 "method": "bdev_nvme_attach_controller" 00:19:05.449 } 00:19:05.449 EOF 00:19:05.449 )") 00:19:05.449 00:31:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.449 00:31:19 -- target/dif.sh@82 -- # gen_fio_conf 00:19:05.449 00:31:19 -- target/dif.sh@54 -- # local file 00:19:05.449 00:31:19 -- target/dif.sh@56 -- # cat 00:19:05.449 00:31:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.449 00:31:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:05.449 00:31:19 -- nvmf/common.sh@542 -- # cat 00:19:05.449 00:31:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.449 00:31:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:05.449 00:31:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.449 00:31:19 -- common/autotest_common.sh@1320 -- # shift 00:19:05.449 00:31:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:05.449 00:31:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.449 00:31:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:05.449 00:31:19 -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.449 00:31:19 -- target/dif.sh@73 -- # cat 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:05.449 00:31:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:05.449 00:31:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:05.449 { 00:19:05.449 "params": { 00:19:05.449 "name": "Nvme$subsystem", 00:19:05.449 "trtype": "$TEST_TRANSPORT", 00:19:05.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.449 "adrfam": "ipv4", 00:19:05.449 "trsvcid": "$NVMF_PORT", 00:19:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.449 "hdgst": ${hdgst:-false}, 00:19:05.449 "ddgst": ${ddgst:-false} 00:19:05.449 }, 00:19:05.449 "method": "bdev_nvme_attach_controller" 00:19:05.449 } 00:19:05.449 EOF 00:19:05.449 )") 00:19:05.449 00:31:19 -- nvmf/common.sh@542 -- # cat 00:19:05.449 00:31:19 -- target/dif.sh@72 -- # (( file++ )) 00:19:05.449 00:31:19 -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.449 00:31:19 -- nvmf/common.sh@544 -- # jq . 
00:19:05.449 00:31:19 -- nvmf/common.sh@545 -- # IFS=, 00:19:05.449 00:31:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:05.449 "params": { 00:19:05.449 "name": "Nvme0", 00:19:05.449 "trtype": "tcp", 00:19:05.449 "traddr": "10.0.0.2", 00:19:05.449 "adrfam": "ipv4", 00:19:05.449 "trsvcid": "4420", 00:19:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.449 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:05.449 "hdgst": false, 00:19:05.449 "ddgst": false 00:19:05.449 }, 00:19:05.449 "method": "bdev_nvme_attach_controller" 00:19:05.449 },{ 00:19:05.449 "params": { 00:19:05.449 "name": "Nvme1", 00:19:05.449 "trtype": "tcp", 00:19:05.449 "traddr": "10.0.0.2", 00:19:05.449 "adrfam": "ipv4", 00:19:05.449 "trsvcid": "4420", 00:19:05.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.449 "hdgst": false, 00:19:05.449 "ddgst": false 00:19:05.449 }, 00:19:05.449 "method": "bdev_nvme_attach_controller" 00:19:05.449 }' 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:05.449 00:31:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:05.449 00:31:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:05.449 00:31:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:05.449 00:31:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:05.449 00:31:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.449 00:31:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.449 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:05.449 ... 00:19:05.449 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:05.449 ... 00:19:05.449 fio-3.35 00:19:05.449 Starting 4 threads 00:19:05.449 [2024-09-29 00:31:19.976248] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
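
The trace above shows how the dif tests drive fio: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, gen_fio_conf builds the matching job file, and fio_bdev hands both to the SPDK fio plugin over /dev/fd descriptors with the plugin LD_PRELOADed. Below is a stripped-down sketch of the same pattern, not the dif.sh helpers themselves; the plugin path, the bdev name Nvme0n1 and the job parameters are illustrative assumptions:

    #!/usr/bin/env bash
    # Sketch: run fio against one NVMe/TCP subsystem through the SPDK bdev plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # built fio plugin (assumed path)
    json='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
                "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0"}}]}]}'
    job=$'[global]\nioengine=spdk_bdev\nthread=1\n[job0]\nfilename=Nvme0n1\nrw=randread\nbs=8k\niodepth=8\nruntime=5\ntime_based=1'
    # attach_controller "Nvme0" exposes bdev Nvme0n1; fio reads the JSON config and the job
    # file from process substitutions, mirroring the /dev/fd/62 and /dev/fd/61 seen above.
    LD_PRELOAD=$plugin fio --ioengine=spdk_bdev \
        --spdk_json_conf=<(printf '%s\n' "$json") <(printf '%s\n' "$job")
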
00:19:05.449 [2024-09-29 00:31:19.976587] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:09.643 00:19:09.643 filename0: (groupid=0, jobs=1): err= 0: pid=74795: Sun Sep 29 00:31:25 2024 00:19:09.643 read: IOPS=2292, BW=17.9MiB/s (18.8MB/s)(89.6MiB/5002msec) 00:19:09.643 slat (nsec): min=6779, max=84963, avg=12783.41, stdev=5685.83 00:19:09.643 clat (usec): min=1537, max=5997, avg=3459.75, stdev=1079.06 00:19:09.643 lat (usec): min=1566, max=6008, avg=3472.53, stdev=1078.85 00:19:09.643 clat percentiles (usec): 00:19:09.643 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2442], 00:19:09.643 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 4424], 00:19:09.643 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:09.643 | 99.00th=[ 5014], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 5145], 00:19:09.643 | 99.99th=[ 5145] 00:19:09.643 bw ( KiB/s): min=18016, max=18944, per=26.81%, avg=18333.67, stdev=290.17, samples=9 00:19:09.643 iops : min= 2252, max= 2368, avg=2291.67, stdev=36.30, samples=9 00:19:09.643 lat (msec) : 2=0.92%, 4=54.10%, 10=44.99% 00:19:09.643 cpu : usr=91.12%, sys=7.74%, ctx=4, majf=0, minf=0 00:19:09.643 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 issued rwts: total=11468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:09.643 filename0: (groupid=0, jobs=1): err= 0: pid=74796: Sun Sep 29 00:31:25 2024 00:19:09.643 read: IOPS=1693, BW=13.2MiB/s (13.9MB/s)(66.2MiB/5002msec) 00:19:09.643 slat (nsec): min=3714, max=77916, avg=12082.60, stdev=5746.99 00:19:09.643 clat (usec): min=1183, max=6255, avg=4673.79, stdev=429.50 00:19:09.643 lat (usec): min=1192, max=6265, avg=4685.88, stdev=428.45 00:19:09.643 clat percentiles (usec): 00:19:09.643 | 1.00th=[ 2409], 5.00th=[ 4424], 10.00th=[ 4490], 20.00th=[ 4621], 00:19:09.643 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4752], 00:19:09.643 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:09.643 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5669], 99.95th=[ 5735], 00:19:09.643 | 99.99th=[ 6259] 00:19:09.643 bw ( KiB/s): min=13184, max=15360, per=19.84%, avg=13568.00, stdev=695.22, samples=9 00:19:09.643 iops : min= 1648, max= 1920, avg=1696.00, stdev=86.90, samples=9 00:19:09.643 lat (msec) : 2=0.35%, 4=3.34%, 10=96.31% 00:19:09.643 cpu : usr=92.12%, sys=6.92%, ctx=8, majf=0, minf=9 00:19:09.643 IO depths : 1=0.1%, 2=23.6%, 4=50.7%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 issued rwts: total=8472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:09.643 filename1: (groupid=0, jobs=1): err= 0: pid=74797: Sun Sep 29 00:31:25 2024 00:19:09.643 read: IOPS=2293, BW=17.9MiB/s (18.8MB/s)(89.6MiB/5001msec) 00:19:09.643 slat (nsec): min=7349, max=89605, avg=15760.45, stdev=5720.76 00:19:09.643 clat (usec): min=827, max=6436, avg=3452.28, stdev=1068.86 00:19:09.643 lat (usec): min=839, max=6458, avg=3468.04, stdev=1068.37 00:19:09.643 clat percentiles (usec): 00:19:09.643 | 1.00th=[ 2008], 
5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2442], 00:19:09.643 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 4424], 00:19:09.643 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:09.643 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5080], 99.95th=[ 5145], 00:19:09.643 | 99.99th=[ 5211] 00:19:09.643 bw ( KiB/s): min=18016, max=18944, per=26.81%, avg=18337.78, stdev=288.72, samples=9 00:19:09.643 iops : min= 2252, max= 2368, avg=2292.22, stdev=36.09, samples=9 00:19:09.643 lat (usec) : 1000=0.01% 00:19:09.643 lat (msec) : 2=0.88%, 4=54.13%, 10=44.98% 00:19:09.643 cpu : usr=92.06%, sys=6.80%, ctx=4, majf=0, minf=9 00:19:09.643 IO depths : 1=0.1%, 2=0.1%, 4=63.6%, 8=36.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 issued rwts: total=11469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:09.643 filename1: (groupid=0, jobs=1): err= 0: pid=74798: Sun Sep 29 00:31:25 2024 00:19:09.643 read: IOPS=2269, BW=17.7MiB/s (18.6MB/s)(88.7MiB/5001msec) 00:19:09.643 slat (nsec): min=6931, max=89610, avg=15535.06, stdev=5937.03 00:19:09.643 clat (usec): min=777, max=6179, avg=3488.87, stdev=1081.37 00:19:09.643 lat (usec): min=785, max=6215, avg=3504.40, stdev=1080.44 00:19:09.643 clat percentiles (usec): 00:19:09.643 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2442], 00:19:09.643 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2769], 60.00th=[ 4424], 00:19:09.643 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:09.643 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5145], 99.95th=[ 5604], 00:19:09.643 | 99.99th=[ 5735] 00:19:09.643 bw ( KiB/s): min=16080, max=18944, per=26.48%, avg=18108.44, stdev=810.36, samples=9 00:19:09.643 iops : min= 2010, max= 2368, avg=2263.56, stdev=101.30, samples=9 00:19:09.643 lat (usec) : 1000=0.04% 00:19:09.643 lat (msec) : 2=0.62%, 4=52.49%, 10=46.86% 00:19:09.643 cpu : usr=91.54%, sys=7.28%, ctx=30, majf=0, minf=9 00:19:09.643 IO depths : 1=0.1%, 2=0.8%, 4=63.2%, 8=36.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.643 issued rwts: total=11349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:09.643 00:19:09.643 Run status group 0 (all jobs): 00:19:09.643 READ: bw=66.8MiB/s (70.0MB/s), 13.2MiB/s-17.9MiB/s (13.9MB/s-18.8MB/s), io=334MiB (350MB), run=5001-5002msec 00:19:09.643 00:31:25 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:09.643 00:31:25 -- target/dif.sh@43 -- # local sub 00:19:09.643 00:31:25 -- target/dif.sh@45 -- # for sub in "$@" 00:19:09.643 00:31:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:09.643 00:31:25 -- target/dif.sh@36 -- # local sub_id=0 00:19:09.643 00:31:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:09.643 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.643 00:31:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:09.643 00:31:25 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.643 00:31:25 -- target/dif.sh@45 -- # for sub in "$@" 00:19:09.643 00:31:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:09.643 00:31:25 -- target/dif.sh@36 -- # local sub_id=1 00:19:09.643 00:31:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.643 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.643 00:31:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:09.643 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.643 00:19:09.643 real 0m23.108s 00:19:09.643 user 2m3.920s 00:19:09.643 sys 0m7.914s 00:19:09.643 00:31:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 ************************************ 00:19:09.643 END TEST fio_dif_rand_params 00:19:09.643 ************************************ 00:19:09.643 00:31:25 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:09.643 00:31:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:09.643 00:31:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.643 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.643 ************************************ 00:19:09.643 START TEST fio_dif_digest 00:19:09.643 ************************************ 00:19:09.643 00:31:25 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:19:09.643 00:31:25 -- target/dif.sh@123 -- # local NULL_DIF 00:19:09.643 00:31:25 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:09.643 00:31:25 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:09.643 00:31:25 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:09.643 00:31:25 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:09.643 00:31:25 -- target/dif.sh@127 -- # numjobs=3 00:19:09.643 00:31:25 -- target/dif.sh@127 -- # iodepth=3 00:19:09.643 00:31:25 -- target/dif.sh@127 -- # runtime=10 00:19:09.643 00:31:25 -- target/dif.sh@128 -- # hdgst=true 00:19:09.643 00:31:25 -- target/dif.sh@128 -- # ddgst=true 00:19:09.643 00:31:25 -- target/dif.sh@130 -- # create_subsystems 0 00:19:09.643 00:31:25 -- target/dif.sh@28 -- # local sub 00:19:09.644 00:31:25 -- target/dif.sh@30 -- # for sub in "$@" 00:19:09.644 00:31:25 -- target/dif.sh@31 -- # create_subsystem 0 00:19:09.644 00:31:25 -- target/dif.sh@18 -- # local sub_id=0 00:19:09.644 00:31:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:09.644 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.644 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.644 bdev_null0 00:19:09.644 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.644 00:31:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:09.644 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.644 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.644 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.644 00:31:25 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:09.644 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.644 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.644 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.644 00:31:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:09.644 00:31:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:09.644 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.644 [2024-09-29 00:31:25.389837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.644 00:31:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:09.644 00:31:25 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:09.644 00:31:25 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:09.644 00:31:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:09.644 00:31:25 -- nvmf/common.sh@520 -- # config=() 00:19:09.644 00:31:25 -- nvmf/common.sh@520 -- # local subsystem config 00:19:09.644 00:31:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.644 00:31:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:09.644 00:31:25 -- target/dif.sh@82 -- # gen_fio_conf 00:19:09.644 00:31:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:09.644 { 00:19:09.644 "params": { 00:19:09.644 "name": "Nvme$subsystem", 00:19:09.644 "trtype": "$TEST_TRANSPORT", 00:19:09.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.644 "adrfam": "ipv4", 00:19:09.644 "trsvcid": "$NVMF_PORT", 00:19:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.644 "hdgst": ${hdgst:-false}, 00:19:09.644 "ddgst": ${ddgst:-false} 00:19:09.644 }, 00:19:09.644 "method": "bdev_nvme_attach_controller" 00:19:09.644 } 00:19:09.644 EOF 00:19:09.644 )") 00:19:09.644 00:31:25 -- target/dif.sh@54 -- # local file 00:19:09.644 00:31:25 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.644 00:31:25 -- target/dif.sh@56 -- # cat 00:19:09.644 00:31:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:09.644 00:31:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:09.644 00:31:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:09.644 00:31:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.644 00:31:25 -- nvmf/common.sh@542 -- # cat 00:19:09.644 00:31:25 -- common/autotest_common.sh@1320 -- # shift 00:19:09.644 00:31:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:09.644 00:31:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.644 00:31:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:09.644 00:31:25 -- target/dif.sh@72 -- # (( file <= files )) 00:19:09.644 00:31:25 -- nvmf/common.sh@544 -- # jq . 
00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.644 00:31:25 -- nvmf/common.sh@545 -- # IFS=, 00:19:09.644 00:31:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:09.644 "params": { 00:19:09.644 "name": "Nvme0", 00:19:09.644 "trtype": "tcp", 00:19:09.644 "traddr": "10.0.0.2", 00:19:09.644 "adrfam": "ipv4", 00:19:09.644 "trsvcid": "4420", 00:19:09.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:09.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:09.644 "hdgst": true, 00:19:09.644 "ddgst": true 00:19:09.644 }, 00:19:09.644 "method": "bdev_nvme_attach_controller" 00:19:09.644 }' 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:09.644 00:31:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:09.644 00:31:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:09.644 00:31:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:09.644 00:31:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:09.644 00:31:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:09.644 00:31:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:09.903 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:09.903 ... 00:19:09.903 fio-3.35 00:19:09.903 Starting 3 threads 00:19:10.161 [2024-09-29 00:31:25.941796] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
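
The asan_lib= assignments above are the plugin sanitizer probe: before launching fio, the helper runs ldd on the spdk_bdev plugin and, if the plugin is linked against libasan or libclang_rt.asan, prepends that runtime to LD_PRELOAD so the preloaded plugin can run under the sanitizer. A rough equivalent of that check, assuming the same plugin path as in the trace:

    #!/usr/bin/env bash
    # Sketch of the sanitizer handling seen above (both probes are empty in this run).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    preload=''
    for sanitizer in libasan libclang_rt.asan; do
        # column 3 of ldd output is the resolved library path; empty when not linked in
        lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$lib" ]] && preload+=" $lib"
    done
    export LD_PRELOAD="$preload $plugin"   # fio is then started with this environment
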
00:19:10.161 [2024-09-29 00:31:25.942040] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:22.365 00:19:22.365 filename0: (groupid=0, jobs=1): err= 0: pid=74904: Sun Sep 29 00:31:36 2024 00:19:22.365 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10008msec) 00:19:22.365 slat (nsec): min=6842, max=59358, avg=9949.12, stdev=4672.25 00:19:22.365 clat (usec): min=11557, max=14346, avg=12790.04, stdev=587.05 00:19:22.365 lat (usec): min=11565, max=14358, avg=12799.99, stdev=587.41 00:19:22.365 clat percentiles (usec): 00:19:22.365 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11994], 20.00th=[12387], 00:19:22.365 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:22.365 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:19:22.365 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14353], 99.95th=[14353], 00:19:22.365 | 99.99th=[14353] 00:19:22.365 bw ( KiB/s): min=29184, max=32256, per=33.32%, avg=29952.00, stdev=826.41, samples=20 00:19:22.365 iops : min= 228, max= 252, avg=234.00, stdev= 6.46, samples=20 00:19:22.365 lat (msec) : 20=100.00% 00:19:22.365 cpu : usr=92.01%, sys=7.35%, ctx=10, majf=0, minf=0 00:19:22.365 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.365 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.365 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:22.365 filename0: (groupid=0, jobs=1): err= 0: pid=74905: Sun Sep 29 00:31:36 2024 00:19:22.365 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10007msec) 00:19:22.365 slat (nsec): min=6873, max=78769, avg=10741.81, stdev=5485.86 00:19:22.365 clat (usec): min=8933, max=14888, avg=12786.10, stdev=604.99 00:19:22.365 lat (usec): min=8941, max=14912, avg=12796.84, stdev=605.56 00:19:22.365 clat percentiles (usec): 00:19:22.365 | 1.00th=[11600], 5.00th=[11863], 10.00th=[11994], 20.00th=[12387], 00:19:22.365 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:22.365 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:19:22.365 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:19:22.365 | 99.99th=[14877] 00:19:22.365 bw ( KiB/s): min=29184, max=32256, per=33.32%, avg=29952.00, stdev=747.52, samples=20 00:19:22.365 iops : min= 228, max= 252, avg=234.00, stdev= 5.84, samples=20 00:19:22.365 lat (msec) : 10=0.13%, 20=99.87% 00:19:22.365 cpu : usr=92.16%, sys=7.12%, ctx=12, majf=0, minf=9 00:19:22.365 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.365 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.365 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:22.365 filename0: (groupid=0, jobs=1): err= 0: pid=74906: Sun Sep 29 00:31:36 2024 00:19:22.365 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10008msec) 00:19:22.365 slat (nsec): min=6762, max=60463, avg=9709.34, stdev=3973.39 00:19:22.365 clat (usec): min=10922, max=14520, avg=12790.58, stdev=592.99 00:19:22.365 lat (usec): min=10929, max=14532, avg=12800.29, stdev=593.27 00:19:22.365 clat percentiles (usec): 00:19:22.365 | 1.00th=[11600], 5.00th=[11863], 
10.00th=[11994], 20.00th=[12387], 00:19:22.365 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:19:22.365 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13829], 00:19:22.365 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:19:22.365 | 99.99th=[14484] 00:19:22.365 bw ( KiB/s): min=29184, max=32256, per=33.32%, avg=29954.90, stdev=744.49, samples=20 00:19:22.365 iops : min= 228, max= 252, avg=234.00, stdev= 5.84, samples=20 00:19:22.365 lat (msec) : 20=100.00% 00:19:22.365 cpu : usr=92.63%, sys=6.73%, ctx=89, majf=0, minf=9 00:19:22.365 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.366 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.366 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:22.366 00:19:22.366 Run status group 0 (all jobs): 00:19:22.366 READ: bw=87.8MiB/s (92.1MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=879MiB (921MB), run=10007-10008msec 00:19:22.366 00:31:36 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:22.366 00:31:36 -- target/dif.sh@43 -- # local sub 00:19:22.366 00:31:36 -- target/dif.sh@45 -- # for sub in "$@" 00:19:22.366 00:31:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:22.366 00:31:36 -- target/dif.sh@36 -- # local sub_id=0 00:19:22.366 00:31:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:22.366 00:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.366 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 00:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.366 00:31:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:22.366 00:31:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.366 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 00:31:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.366 00:19:22.366 real 0m10.901s 00:19:22.366 user 0m28.265s 00:19:22.366 sys 0m2.338s 00:19:22.366 00:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.366 ************************************ 00:19:22.366 END TEST fio_dif_digest 00:19:22.366 ************************************ 00:19:22.366 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 00:31:36 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:22.366 00:31:36 -- target/dif.sh@147 -- # nvmftestfini 00:19:22.366 00:31:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:22.366 00:31:36 -- nvmf/common.sh@116 -- # sync 00:19:22.366 00:31:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:22.366 00:31:36 -- nvmf/common.sh@119 -- # set +e 00:19:22.366 00:31:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:22.366 00:31:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:22.366 rmmod nvme_tcp 00:19:22.366 rmmod nvme_fabrics 00:19:22.366 rmmod nvme_keyring 00:19:22.366 00:31:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:22.366 00:31:36 -- nvmf/common.sh@123 -- # set -e 00:19:22.366 00:31:36 -- nvmf/common.sh@124 -- # return 0 00:19:22.366 00:31:36 -- nvmf/common.sh@477 -- # '[' -n 74133 ']' 00:19:22.366 00:31:36 -- nvmf/common.sh@478 -- # killprocess 74133 00:19:22.366 00:31:36 -- common/autotest_common.sh@926 -- # '[' -z 74133 ']' 00:19:22.366 00:31:36 -- common/autotest_common.sh@930 -- # kill 
-0 74133 00:19:22.366 00:31:36 -- common/autotest_common.sh@931 -- # uname 00:19:22.366 00:31:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:22.366 00:31:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74133 00:19:22.366 killing process with pid 74133 00:19:22.366 00:31:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:22.366 00:31:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:22.366 00:31:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74133' 00:19:22.366 00:31:36 -- common/autotest_common.sh@945 -- # kill 74133 00:19:22.366 00:31:36 -- common/autotest_common.sh@950 -- # wait 74133 00:19:22.366 00:31:36 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:22.366 00:31:36 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:22.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:22.366 Waiting for block devices as requested 00:19:22.366 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:22.366 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:22.366 00:31:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:22.366 00:31:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:22.366 00:31:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.366 00:31:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:22.366 00:31:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.366 00:31:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:22.366 00:31:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.366 00:31:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:22.366 ************************************ 00:19:22.366 END TEST nvmf_dif 00:19:22.366 ************************************ 00:19:22.366 00:19:22.366 real 0m59.012s 00:19:22.366 user 3m47.285s 00:19:22.366 sys 0m18.554s 00:19:22.366 00:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.366 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 00:31:37 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:22.366 00:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:22.366 00:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:22.366 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:19:22.366 ************************************ 00:19:22.366 START TEST nvmf_abort_qd_sizes 00:19:22.366 ************************************ 00:19:22.366 00:31:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:22.366 * Looking for test storage... 
00:19:22.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.366 00:31:37 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.366 00:31:37 -- nvmf/common.sh@7 -- # uname -s 00:19:22.366 00:31:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.366 00:31:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.366 00:31:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.366 00:31:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.366 00:31:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.366 00:31:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.366 00:31:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.366 00:31:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.366 00:31:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.366 00:31:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.366 00:31:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:19:22.366 00:31:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 00:19:22.366 00:31:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.366 00:31:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.366 00:31:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.366 00:31:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.366 00:31:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.366 00:31:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.366 00:31:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.366 00:31:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.366 00:31:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.366 00:31:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.366 00:31:37 -- paths/export.sh@5 -- # export PATH 00:19:22.366 00:31:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.366 00:31:37 -- nvmf/common.sh@46 -- # : 0 00:19:22.366 00:31:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:22.366 00:31:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:22.366 00:31:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:22.366 00:31:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.366 00:31:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.366 00:31:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:22.366 00:31:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:22.366 00:31:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:22.366 00:31:37 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:22.366 00:31:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:22.366 00:31:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.366 00:31:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:22.366 00:31:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:22.366 00:31:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:22.367 00:31:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.367 00:31:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:22.367 00:31:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.367 00:31:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:22.367 00:31:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:22.367 00:31:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:22.367 00:31:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:22.367 00:31:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:22.367 00:31:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:22.367 00:31:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.367 00:31:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.367 00:31:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:22.367 00:31:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:22.367 00:31:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.367 00:31:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.367 00:31:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.367 00:31:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.367 00:31:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.367 00:31:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.367 00:31:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.367 00:31:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.367 00:31:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:22.367 00:31:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:22.367 Cannot find device "nvmf_tgt_br" 00:19:22.367 00:31:37 -- nvmf/common.sh@154 -- # true 00:19:22.367 00:31:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.367 Cannot find device "nvmf_tgt_br2" 00:19:22.367 00:31:37 -- nvmf/common.sh@155 -- # true 
00:19:22.367 00:31:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:22.367 00:31:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:22.367 Cannot find device "nvmf_tgt_br" 00:19:22.367 00:31:37 -- nvmf/common.sh@157 -- # true 00:19:22.367 00:31:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:22.367 Cannot find device "nvmf_tgt_br2" 00:19:22.367 00:31:37 -- nvmf/common.sh@158 -- # true 00:19:22.367 00:31:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:22.367 00:31:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:22.367 00:31:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.367 00:31:37 -- nvmf/common.sh@161 -- # true 00:19:22.367 00:31:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.367 00:31:37 -- nvmf/common.sh@162 -- # true 00:19:22.367 00:31:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.367 00:31:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.367 00:31:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.367 00:31:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.367 00:31:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.367 00:31:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.367 00:31:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.367 00:31:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:22.367 00:31:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:22.367 00:31:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:22.367 00:31:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:22.367 00:31:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:22.367 00:31:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:22.367 00:31:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.367 00:31:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.367 00:31:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.367 00:31:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:22.367 00:31:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:22.367 00:31:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.367 00:31:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.367 00:31:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.367 00:31:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.367 00:31:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.367 00:31:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:22.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:22.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:19:22.367 00:19:22.367 --- 10.0.0.2 ping statistics --- 00:19:22.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.367 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:22.367 00:31:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:22.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:19:22.367 00:19:22.367 --- 10.0.0.3 ping statistics --- 00:19:22.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.367 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:22.367 00:31:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:19:22.367 00:19:22.367 --- 10.0.0.1 ping statistics --- 00:19:22.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.367 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:22.367 00:31:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.367 00:31:37 -- nvmf/common.sh@421 -- # return 0 00:19:22.367 00:31:37 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:22.367 00:31:37 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:22.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:22.626 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:22.885 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:22.885 00:31:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.885 00:31:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:22.885 00:31:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:22.885 00:31:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.885 00:31:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:22.885 00:31:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:22.885 00:31:38 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:22.885 00:31:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:22.885 00:31:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:22.885 00:31:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 00:31:38 -- nvmf/common.sh@469 -- # nvmfpid=75495 00:19:22.885 00:31:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:22.885 00:31:38 -- nvmf/common.sh@470 -- # waitforlisten 75495 00:19:22.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.885 00:31:38 -- common/autotest_common.sh@819 -- # '[' -z 75495 ']' 00:19:22.885 00:31:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.885 00:31:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:22.885 00:31:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.885 00:31:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:22.885 00:31:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.885 [2024-09-29 00:31:38.645370] Starting SPDK v24.01.1-pre git sha1 726a04d70 / DPDK 23.11.0 initialization... 
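
Before the target app start above, nvmf_veth_init first tried to remove any stale test interfaces (the "Cannot find device" and "Cannot open network namespace" messages are the expected outcome on a clean host) and then rebuilt the virtual topology: a host-side initiator veth, target veths moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining them, so the host at 10.0.0.1 can reach the target addresses 10.0.0.2/10.0.0.3. A trimmed sketch of that topology with a single target interface, assuming iproute2 and root; the full helper also wires up nvmf_tgt_if2 with 10.0.0.3:

    # host initiator <-> bridge <-> veth into the target network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-to-target sanity check, as in the trace above
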
00:19:22.885 [2024-09-29 00:31:38.645674] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.144 [2024-09-29 00:31:38.787907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.144 [2024-09-29 00:31:38.858132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.144 [2024-09-29 00:31:38.858610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.144 [2024-09-29 00:31:38.858770] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.144 [2024-09-29 00:31:38.858905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.144 [2024-09-29 00:31:38.859085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.144 [2024-09-29 00:31:38.859382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.144 [2024-09-29 00:31:38.859427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.144 [2024-09-29 00:31:38.859446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.080 00:31:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:24.080 00:31:39 -- common/autotest_common.sh@852 -- # return 0 00:19:24.080 00:31:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.080 00:31:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:24.080 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.080 00:31:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.080 00:31:39 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:24.080 00:31:39 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:24.080 00:31:39 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:24.080 00:31:39 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:24.080 00:31:39 -- scripts/common.sh@312 -- # local nvmes 00:19:24.080 00:31:39 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:24.080 00:31:39 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:24.080 00:31:39 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:24.080 00:31:39 -- scripts/common.sh@297 -- # local bdf= 00:19:24.080 00:31:39 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:24.080 00:31:39 -- scripts/common.sh@232 -- # local class 00:19:24.080 00:31:39 -- scripts/common.sh@233 -- # local subclass 00:19:24.080 00:31:39 -- scripts/common.sh@234 -- # local progif 00:19:24.080 00:31:39 -- scripts/common.sh@235 -- # printf %02x 1 00:19:24.080 00:31:39 -- scripts/common.sh@235 -- # class=01 00:19:24.080 00:31:39 -- scripts/common.sh@236 -- # printf %02x 8 00:19:24.080 00:31:39 -- scripts/common.sh@236 -- # subclass=08 00:19:24.080 00:31:39 -- scripts/common.sh@237 -- # printf %02x 2 00:19:24.080 00:31:39 -- scripts/common.sh@237 -- # progif=02 00:19:24.080 00:31:39 -- scripts/common.sh@239 -- # hash lspci 00:19:24.080 00:31:39 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:24.080 00:31:39 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:24.080 00:31:39 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:24.080 00:31:39 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:24.080 00:31:39 -- scripts/common.sh@244 -- # tr -d '"' 00:19:24.080 00:31:39 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:24.080 00:31:39 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:24.080 00:31:39 -- scripts/common.sh@15 -- # local i 00:19:24.080 00:31:39 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:24.080 00:31:39 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:24.080 00:31:39 -- scripts/common.sh@24 -- # return 0 00:19:24.080 00:31:39 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:24.080 00:31:39 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:24.080 00:31:39 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:24.080 00:31:39 -- scripts/common.sh@15 -- # local i 00:19:24.080 00:31:39 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:24.080 00:31:39 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:24.080 00:31:39 -- scripts/common.sh@24 -- # return 0 00:19:24.080 00:31:39 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:24.080 00:31:39 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:24.080 00:31:39 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:24.080 00:31:39 -- scripts/common.sh@322 -- # uname -s 00:19:24.080 00:31:39 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:24.080 00:31:39 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:24.080 00:31:39 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:24.080 00:31:39 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:24.080 00:31:39 -- scripts/common.sh@322 -- # uname -s 00:19:24.080 00:31:39 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:24.081 00:31:39 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:24.081 00:31:39 -- scripts/common.sh@327 -- # (( 2 )) 00:19:24.081 00:31:39 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:24.081 00:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:24.081 00:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 ************************************ 00:19:24.081 START TEST spdk_target_abort 00:19:24.081 ************************************ 00:19:24.081 00:31:39 -- common/autotest_common.sh@1104 -- # spdk_target 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:24.081 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 spdk_targetn1 00:19:24.081 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.081 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 [2024-09-29 
00:31:39.834291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.081 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:24.081 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:24.081 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:24.081 00:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:24.081 00:31:39 -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 [2024-09-29 00:31:39.862460] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.081 00:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:24.081 00:31:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:27.368 Initializing NVMe Controllers 00:19:27.368 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:27.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:27.368 Initialization complete. Launching workers. 00:19:27.368 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11121, failed: 0 00:19:27.368 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1030, failed to submit 10091 00:19:27.368 success 754, unsuccess 276, failed 0 00:19:27.368 00:31:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:27.368 00:31:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:30.656 Initializing NVMe Controllers 00:19:30.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:30.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:30.656 Initialization complete. Launching workers. 00:19:30.656 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8832, failed: 0 00:19:30.656 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1163, failed to submit 7669 00:19:30.656 success 383, unsuccess 780, failed 0 00:19:30.656 00:31:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:30.656 00:31:46 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:33.954 Initializing NVMe Controllers 00:19:33.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:33.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:33.954 Initialization complete. Launching workers. 
00:19:33.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31318, failed: 0 00:19:33.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2247, failed to submit 29071 00:19:33.954 success 488, unsuccess 1759, failed 0 00:19:33.954 00:31:49 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:33.954 00:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.954 00:31:49 -- common/autotest_common.sh@10 -- # set +x 00:19:33.954 00:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:33.954 00:31:49 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:33.954 00:31:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:33.954 00:31:49 -- common/autotest_common.sh@10 -- # set +x 00:19:34.213 00:31:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.213 00:31:49 -- target/abort_qd_sizes.sh@62 -- # killprocess 75495 00:19:34.213 00:31:49 -- common/autotest_common.sh@926 -- # '[' -z 75495 ']' 00:19:34.213 00:31:49 -- common/autotest_common.sh@930 -- # kill -0 75495 00:19:34.213 00:31:49 -- common/autotest_common.sh@931 -- # uname 00:19:34.213 00:31:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.213 00:31:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75495 00:19:34.213 killing process with pid 75495 00:19:34.213 00:31:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:34.213 00:31:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:34.213 00:31:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75495' 00:19:34.213 00:31:49 -- common/autotest_common.sh@945 -- # kill 75495 00:19:34.213 00:31:49 -- common/autotest_common.sh@950 -- # wait 75495 00:19:34.213 ************************************ 00:19:34.213 END TEST spdk_target_abort 00:19:34.213 ************************************ 00:19:34.213 00:19:34.213 real 0m10.300s 00:19:34.213 user 0m42.137s 00:19:34.213 sys 0m2.041s 00:19:34.213 00:31:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.213 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 00:31:50 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:34.472 00:31:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:34.472 00:31:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.472 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:19:34.472 ************************************ 00:19:34.472 START TEST kernel_target_abort 00:19:34.472 ************************************ 00:19:34.472 00:31:50 -- common/autotest_common.sh@1104 -- # kernel_target 00:19:34.472 00:31:50 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:34.472 00:31:50 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:34.472 00:31:50 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:34.472 00:31:50 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:34.472 00:31:50 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:34.472 00:31:50 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:34.472 00:31:50 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:34.472 00:31:50 -- nvmf/common.sh@627 -- # local block nvme 00:19:34.472 00:31:50 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:34.472 00:31:50 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:34.472 00:31:50 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:34.472 00:31:50 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:34.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.730 Waiting for block devices as requested 00:19:34.730 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:34.989 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:34.989 00:31:50 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:34.989 00:31:50 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:34.989 00:31:50 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:34.989 00:31:50 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:34.989 00:31:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:34.989 No valid GPT data, bailing 00:19:34.989 00:31:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:34.989 00:31:50 -- scripts/common.sh@393 -- # pt= 00:19:34.989 00:31:50 -- scripts/common.sh@394 -- # return 1 00:19:34.989 00:31:50 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:34.989 00:31:50 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:34.989 00:31:50 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:34.989 00:31:50 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:34.989 00:31:50 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:34.989 00:31:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:34.989 No valid GPT data, bailing 00:19:34.989 00:31:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:34.989 00:31:50 -- scripts/common.sh@393 -- # pt= 00:19:34.989 00:31:50 -- scripts/common.sh@394 -- # return 1 00:19:34.989 00:31:50 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:34.989 00:31:50 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:34.989 00:31:50 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:34.989 00:31:50 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:34.989 00:31:50 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:34.989 00:31:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:35.248 No valid GPT data, bailing 00:19:35.248 00:31:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:35.248 00:31:50 -- scripts/common.sh@393 -- # pt= 00:19:35.248 00:31:50 -- scripts/common.sh@394 -- # return 1 00:19:35.248 00:31:50 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:35.248 00:31:50 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:35.248 00:31:50 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:35.248 00:31:50 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:35.248 00:31:50 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:35.248 00:31:50 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:35.248 No valid GPT data, bailing 00:19:35.248 00:31:50 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:35.248 00:31:50 -- scripts/common.sh@393 -- # pt= 00:19:35.248 00:31:50 -- scripts/common.sh@394 -- # return 1 00:19:35.248 00:31:50 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:35.248 00:31:50 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:35.248 00:31:50 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:35.248 00:31:50 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:35.248 00:31:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:35.248 00:31:50 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:35.248 00:31:50 -- nvmf/common.sh@654 -- # echo 1 00:19:35.248 00:31:50 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:35.248 00:31:50 -- nvmf/common.sh@656 -- # echo 1 00:19:35.248 00:31:50 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:35.248 00:31:50 -- nvmf/common.sh@663 -- # echo tcp 00:19:35.248 00:31:50 -- nvmf/common.sh@664 -- # echo 4420 00:19:35.248 00:31:50 -- nvmf/common.sh@665 -- # echo ipv4 00:19:35.248 00:31:50 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:35.248 00:31:51 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6b3816-3c6f-465f-b07b-7dcb822f2a02 --hostid=cd6b3816-3c6f-465f-b07b-7dcb822f2a02 -a 10.0.0.1 -t tcp -s 4420 00:19:35.248 00:19:35.248 Discovery Log Number of Records 2, Generation counter 2 00:19:35.249 =====Discovery Log Entry 0====== 00:19:35.249 trtype: tcp 00:19:35.249 adrfam: ipv4 00:19:35.249 subtype: current discovery subsystem 00:19:35.249 treq: not specified, sq flow control disable supported 00:19:35.249 portid: 1 00:19:35.249 trsvcid: 4420 00:19:35.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:35.249 traddr: 10.0.0.1 00:19:35.249 eflags: none 00:19:35.249 sectype: none 00:19:35.249 =====Discovery Log Entry 1====== 00:19:35.249 trtype: tcp 00:19:35.249 adrfam: ipv4 00:19:35.249 subtype: nvme subsystem 00:19:35.249 treq: not specified, sq flow control disable supported 00:19:35.249 portid: 1 00:19:35.249 trsvcid: 4420 00:19:35.249 subnqn: kernel_target 00:19:35.249 traddr: 10.0.0.1 00:19:35.249 eflags: none 00:19:35.249 sectype: none 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:35.249 00:31:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:38.532 Initializing NVMe Controllers 00:19:38.532 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:38.532 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:38.532 Initialization complete. Launching workers. 00:19:38.533 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31343, failed: 0 00:19:38.533 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31343, failed to submit 0 00:19:38.533 success 0, unsuccess 31343, failed 0 00:19:38.533 00:31:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:38.533 00:31:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:41.820 Initializing NVMe Controllers 00:19:41.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:41.820 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:41.820 Initialization complete. Launching workers. 00:19:41.820 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 62778, failed: 0 00:19:41.820 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26430, failed to submit 36348 00:19:41.820 success 0, unsuccess 26430, failed 0 00:19:41.820 00:31:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:41.820 00:31:57 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:45.141 Initializing NVMe Controllers 00:19:45.141 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:45.141 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:45.141 Initialization complete. Launching workers. 
00:19:45.141 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76161, failed: 0 00:19:45.141 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19024, failed to submit 57137 00:19:45.141 success 0, unsuccess 19024, failed 0 00:19:45.141 00:32:00 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:45.141 00:32:00 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:45.141 00:32:00 -- nvmf/common.sh@677 -- # echo 0 00:19:45.142 00:32:00 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:45.142 00:32:00 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:45.142 00:32:00 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:45.142 00:32:00 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:45.142 00:32:00 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:45.142 00:32:00 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:45.142 ************************************ 00:19:45.142 END TEST kernel_target_abort 00:19:45.142 ************************************ 00:19:45.142 00:19:45.142 real 0m10.520s 00:19:45.142 user 0m5.616s 00:19:45.142 sys 0m2.394s 00:19:45.142 00:32:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.142 00:32:00 -- common/autotest_common.sh@10 -- # set +x 00:19:45.142 00:32:00 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:45.142 00:32:00 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:45.142 00:32:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:45.142 00:32:00 -- nvmf/common.sh@116 -- # sync 00:19:45.142 00:32:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:45.142 00:32:00 -- nvmf/common.sh@119 -- # set +e 00:19:45.142 00:32:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:45.142 00:32:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:45.142 rmmod nvme_tcp 00:19:45.142 rmmod nvme_fabrics 00:19:45.142 rmmod nvme_keyring 00:19:45.142 00:32:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:45.142 00:32:00 -- nvmf/common.sh@123 -- # set -e 00:19:45.142 00:32:00 -- nvmf/common.sh@124 -- # return 0 00:19:45.142 00:32:00 -- nvmf/common.sh@477 -- # '[' -n 75495 ']' 00:19:45.142 00:32:00 -- nvmf/common.sh@478 -- # killprocess 75495 00:19:45.142 00:32:00 -- common/autotest_common.sh@926 -- # '[' -z 75495 ']' 00:19:45.142 00:32:00 -- common/autotest_common.sh@930 -- # kill -0 75495 00:19:45.142 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (75495) - No such process 00:19:45.142 Process with pid 75495 is not found 00:19:45.142 00:32:00 -- common/autotest_common.sh@953 -- # echo 'Process with pid 75495 is not found' 00:19:45.142 00:32:00 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:45.142 00:32:00 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:45.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.710 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:45.710 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:45.710 00:32:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.710 00:32:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.710 00:32:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.710 00:32:01 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:45.710 00:32:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.710 00:32:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:45.710 00:32:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.969 00:32:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:45.969 ************************************ 00:19:45.969 END TEST nvmf_abort_qd_sizes 00:19:45.969 00:19:45.969 real 0m24.310s 00:19:45.969 user 0m49.131s 00:19:45.969 sys 0m5.751s 00:19:45.969 00:32:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.969 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:19:45.969 ************************************ 00:19:45.969 00:32:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:45.969 00:32:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:45.969 00:32:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:45.969 00:32:01 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:45.969 00:32:01 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:19:45.969 00:32:01 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:19:45.969 00:32:01 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:19:45.969 00:32:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:45.969 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:19:45.969 00:32:01 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:19:45.969 00:32:01 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:19:45.969 00:32:01 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:19:45.969 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:19:47.875 INFO: APP EXITING 00:19:47.875 INFO: killing all VMs 00:19:47.875 INFO: killing vhost app 00:19:47.875 INFO: EXIT DONE 00:19:48.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.443 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:48.443 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:49.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.012 Cleaning 00:19:49.012 Removing: /var/run/dpdk/spdk0/config 00:19:49.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:49.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:49.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:49.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:49.279 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:49.279 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:49.279 Removing: /var/run/dpdk/spdk1/config 00:19:49.279 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:49.279 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:49.279 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:49.279 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:49.279 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:49.279 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:49.279 Removing: /var/run/dpdk/spdk2/config 00:19:49.279 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:49.279 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:49.279 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:49.279 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:49.279 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:49.279 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:49.279 Removing: /var/run/dpdk/spdk3/config 00:19:49.279 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:49.279 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:49.279 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:49.279 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:49.279 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:49.279 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:49.279 Removing: /var/run/dpdk/spdk4/config 00:19:49.279 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:49.279 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:49.279 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:49.279 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:49.279 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:49.279 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:49.279 Removing: /dev/shm/nvmf_trace.0 00:19:49.279 Removing: /dev/shm/spdk_tgt_trace.pid53821 00:19:49.279 Removing: /var/run/dpdk/spdk0 00:19:49.279 Removing: /var/run/dpdk/spdk1 00:19:49.279 Removing: /var/run/dpdk/spdk2 00:19:49.279 Removing: /var/run/dpdk/spdk3 00:19:49.279 Removing: /var/run/dpdk/spdk4 00:19:49.279 Removing: /var/run/dpdk/spdk_pid53683 00:19:49.279 Removing: /var/run/dpdk/spdk_pid53821 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54058 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54249 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54383 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54452 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54516 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54606 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54682 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54715 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54745 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54811 00:19:49.279 Removing: /var/run/dpdk/spdk_pid54905 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55330 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55377 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55428 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55444 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55501 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55517 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55584 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55600 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55644 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55662 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55703 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55721 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55843 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55873 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55946 00:19:49.279 Removing: /var/run/dpdk/spdk_pid55998 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56017 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56081 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56095 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56135 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56149 
00:19:49.279 Removing: /var/run/dpdk/spdk_pid56178 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56203 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56232 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56246 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56286 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56300 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56335 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56354 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56383 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56403 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56437 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56451 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56486 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56505 00:19:49.279 Removing: /var/run/dpdk/spdk_pid56534 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56554 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56587 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56602 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56637 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56655 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56692 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56706 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56740 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56760 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56789 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56808 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56843 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56857 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56897 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56914 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56946 00:19:49.539 Removing: /var/run/dpdk/spdk_pid56974 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57006 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57026 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57060 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57074 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57112 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57175 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57254 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57562 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57574 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57605 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57618 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57635 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57653 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57666 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57680 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57697 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57710 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57724 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57741 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57754 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57773 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57785 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57798 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57811 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57824 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57842 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57850 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57886 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57898 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57926 00:19:49.539 Removing: /var/run/dpdk/spdk_pid57988 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58009 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58024 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58047 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58057 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58064 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58105 00:19:49.539 Removing: 
/var/run/dpdk/spdk_pid58116 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58143 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58150 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58158 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58165 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58167 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58180 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58182 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58195 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58216 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58248 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58252 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58286 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58290 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58298 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58338 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58350 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58376 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58384 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58391 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58399 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58406 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58414 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58421 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58429 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58496 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58544 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58647 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58679 00:19:49.539 Removing: /var/run/dpdk/spdk_pid58723 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58743 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58762 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58772 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58807 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58816 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58884 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58898 00:19:49.799 Removing: /var/run/dpdk/spdk_pid58945 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59007 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59057 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59080 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59165 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59211 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59242 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59458 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59550 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59577 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59889 00:19:49.799 Removing: /var/run/dpdk/spdk_pid59928 00:19:49.799 Removing: /var/run/dpdk/spdk_pid60236 00:19:49.799 Removing: /var/run/dpdk/spdk_pid60637 00:19:49.799 Removing: /var/run/dpdk/spdk_pid60907 00:19:49.799 Removing: /var/run/dpdk/spdk_pid61648 00:19:49.799 Removing: /var/run/dpdk/spdk_pid62463 00:19:49.799 Removing: /var/run/dpdk/spdk_pid62580 00:19:49.799 Removing: /var/run/dpdk/spdk_pid62648 00:19:49.799 Removing: /var/run/dpdk/spdk_pid63899 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64112 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64424 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64534 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64667 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64695 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64722 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64744 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64847 00:19:49.799 Removing: /var/run/dpdk/spdk_pid64980 00:19:49.799 Removing: /var/run/dpdk/spdk_pid65131 00:19:49.799 Removing: /var/run/dpdk/spdk_pid65206 00:19:49.799 Removing: /var/run/dpdk/spdk_pid65587 00:19:49.799 Removing: /var/run/dpdk/spdk_pid65936 
00:19:49.799 Removing: /var/run/dpdk/spdk_pid65944 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68128 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68134 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68409 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68424 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68444 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68469 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68475 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68564 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68566 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68678 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68681 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68789 00:19:49.799 Removing: /var/run/dpdk/spdk_pid68797 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69199 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69243 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69352 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69431 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69735 00:19:49.799 Removing: /var/run/dpdk/spdk_pid69938 00:19:49.799 Removing: /var/run/dpdk/spdk_pid70316 00:19:49.799 Removing: /var/run/dpdk/spdk_pid70836 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71288 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71348 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71408 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71470 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71585 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71645 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71706 00:19:49.799 Removing: /var/run/dpdk/spdk_pid71767 00:19:49.799 Removing: /var/run/dpdk/spdk_pid72088 00:19:49.799 Removing: /var/run/dpdk/spdk_pid73261 00:19:49.799 Removing: /var/run/dpdk/spdk_pid73404 00:19:49.799 Removing: /var/run/dpdk/spdk_pid73639 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74196 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74354 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74513 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74610 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74787 00:19:49.799 Removing: /var/run/dpdk/spdk_pid74896 00:19:49.799 Removing: /var/run/dpdk/spdk_pid75546 00:19:49.799 Removing: /var/run/dpdk/spdk_pid75581 00:19:49.799 Removing: /var/run/dpdk/spdk_pid75616 00:19:49.799 Removing: /var/run/dpdk/spdk_pid75865 00:19:50.058 Removing: /var/run/dpdk/spdk_pid75896 00:19:50.058 Removing: /var/run/dpdk/spdk_pid75931 00:19:50.058 Clean 00:19:50.058 killing process with pid 48030 00:19:50.059 killing process with pid 48031 00:19:50.059 00:32:05 -- common/autotest_common.sh@1436 -- # return 0 00:19:50.059 00:32:05 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:19:50.059 00:32:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:50.059 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:19:50.059 00:32:05 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:19:50.059 00:32:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:50.059 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:19:50.059 00:32:05 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:50.059 00:32:05 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:50.059 00:32:05 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:50.059 00:32:05 -- spdk/autotest.sh@394 -- # hash lcov 00:19:50.059 00:32:05 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:19:50.059 00:32:05 -- spdk/autotest.sh@396 -- # hostname 00:19:50.059 00:32:05 -- spdk/autotest.sh@396 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:50.318 geninfo: WARNING: invalid characters removed from testname! 00:20:16.916 00:32:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.916 00:32:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.834 00:32:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:21.368 00:32:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.898 00:32:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:26.432 00:32:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.968 00:32:44 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:28.968 00:32:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.968 00:32:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:28.968 00:32:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.968 00:32:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.968 00:32:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:28.968 00:32:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.968 00:32:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.968 00:32:44 -- paths/export.sh@5 -- $ export PATH 00:20:28.968 00:32:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.968 00:32:44 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:28.968 00:32:44 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:28.968 00:32:44 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1727569964.XXXXXX 00:20:28.968 00:32:44 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1727569964.h997UH 00:20:28.968 00:32:44 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:28.968 00:32:44 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:20:28.968 00:32:44 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:28.968 00:32:44 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:28.968 00:32:44 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:28.968 00:32:44 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:28.968 00:32:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:20:28.968 00:32:44 -- common/autotest_common.sh@10 -- $ set +x 00:20:28.968 00:32:44 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:28.968 00:32:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:28.968 00:32:44 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:28.968 00:32:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:28.968 00:32:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:20:28.968 00:32:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:28.968 00:32:44 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:28.968 00:32:44 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:28.968 00:32:44 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:28.968 
00:32:44 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:28.968 00:32:44 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:28.968 + [[ -n 5233 ]] 00:20:28.968 + sudo kill 5233 00:20:28.977 [Pipeline] } 00:20:28.993 [Pipeline] // timeout 00:20:28.998 [Pipeline] } 00:20:29.014 [Pipeline] // stage 00:20:29.020 [Pipeline] } 00:20:29.035 [Pipeline] // catchError 00:20:29.044 [Pipeline] stage 00:20:29.046 [Pipeline] { (Stop VM) 00:20:29.059 [Pipeline] sh 00:20:29.340 + vagrant halt 00:20:32.628 ==> default: Halting domain... 00:20:39.208 [Pipeline] sh 00:20:39.525 + vagrant destroy -f 00:20:42.813 ==> default: Removing domain... 00:20:42.825 [Pipeline] sh 00:20:43.107 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:43.117 [Pipeline] } 00:20:43.134 [Pipeline] // stage 00:20:43.140 [Pipeline] } 00:20:43.156 [Pipeline] // dir 00:20:43.163 [Pipeline] } 00:20:43.179 [Pipeline] // wrap 00:20:43.187 [Pipeline] } 00:20:43.202 [Pipeline] // catchError 00:20:43.213 [Pipeline] stage 00:20:43.215 [Pipeline] { (Epilogue) 00:20:43.230 [Pipeline] sh 00:20:43.511 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:48.790 [Pipeline] catchError 00:20:48.792 [Pipeline] { 00:20:48.804 [Pipeline] sh 00:20:49.088 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:49.095 Artifacts sizes are good 00:20:49.109 [Pipeline] } 00:20:49.134 [Pipeline] // catchError 00:20:49.145 [Pipeline] archiveArtifacts 00:20:49.152 Archiving artifacts 00:20:49.321 [Pipeline] cleanWs 00:20:49.331 [WS-CLEANUP] Deleting project workspace... 00:20:49.331 [WS-CLEANUP] Deferred wipeout is used... 00:20:49.336 [WS-CLEANUP] done 00:20:49.338 [Pipeline] } 00:20:49.351 [Pipeline] // stage 00:20:49.355 [Pipeline] } 00:20:49.367 [Pipeline] // node 00:20:49.372 [Pipeline] End of Pipeline 00:20:49.413 Finished: SUCCESS
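For reference, the abort queue-depth sweep traced in this log boils down to a handful of RPCs against a running nvmf_tgt plus three invocations of the abort example. The sketch below is a minimal reproduction under stated assumptions, not the test script itself: it uses scripts/rpc.py in place of the test suite's rpc_cmd wrapper, substitutes a malloc bdev (Malloc0) for the NVMe namespace bdev the real test attaches, and assumes an nvmf_tgt process is already running with its default RPC socket. The subsystem NQN, serial, listener address, and abort example flags are taken verbatim from the log above.

#!/usr/bin/env bash
# Minimal sketch of the abort_qd_sizes flow seen in the log (an approximation, not the real test).
# Assumes: an nvmf_tgt is already running, SPDK_REPO points at an SPDK checkout,
# and Malloc0 stands in for the NVMe namespace bdev used in the actual run.
set -euo pipefail

SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}   # repo path as it appears in the log
rpc="$SPDK_REPO/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:spdk_target

# Transport and subsystem setup (mirrors the nvmf_create_* / nvmf_subsystem_* calls traced above)
"$rpc" nvmf_create_transport -t tcp
"$rpc" bdev_malloc_create -b Malloc0 64 512              # stand-in backing bdev (assumption)
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# Queue-depth sweep with the abort example, using the same flags as the logged runs
for qd in 4 24 64; do
  "$SPDK_REPO/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn"
done

# Teardown, as in the log
"$rpc" nvmf_delete_subsystem "$nqn"

The three queue depths exercise different ratios of aborted versus already-completed commands, which is what the per-run "abort submitted / failed to submit" and "success / unsuccess" counters in the log reflect.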